22 Commits

Author SHA1 Message Date
WorkClub Automation 65fea5d48b Introduced Openspec to project 2026-03-18 12:07:34 +01:00
MasterMito 3cf7c3a221 Merge pull request 'feat: restrict admin access to club operations and rollout test environment' (#4) from epic/admin_rework_second_try into main
CI Pipeline / Backend Build & Test (push) Successful in 48s
CI Pipeline / Frontend Lint, Test & Build (push) Successful in 32s
CI Pipeline / Infrastructure Validation (push) Successful in 3s
Reviewed-on: #4
2026-03-18 09:16:58 +01:00
WorkClub Automation d30895c94a fix: resolve frontend lint errors and cleanup types
CI Pipeline / Backend Build & Test (pull_request) Successful in 53s
CI Pipeline / Frontend Lint, Test & Build (pull_request) Successful in 36s
CI Pipeline / Infrastructure Validation (pull_request) Successful in 4s
2026-03-18 09:15:02 +01:00
WorkClub Automation 821459966c feat: restrict admin access to club operations and rollout test environment
CI Pipeline / Backend Build & Test (pull_request) Successful in 53s
CI Pipeline / Frontend Lint, Test & Build (pull_request) Failing after 16s
CI Pipeline / Infrastructure Validation (pull_request) Successful in 3s
2026-03-18 09:08:45 +01:00
WorkClub Automation 9cb80e4517 fix(auth): restore keycloak sign-in for NodePort access
CI Pipeline / Backend Build & Test (push) Successful in 58s
CI Pipeline / Frontend Lint, Test & Build (push) Successful in 28s
CI Pipeline / Infrastructure Validation (push) Successful in 4s
Trust external host for Auth.js, provide missing frontend auth env/secrets, and submit a proper CSRF-backed sign-in POST so browser login reaches Keycloak reliably.
2026-03-13 06:52:18 +01:00
WorkClub Automation d4f09295be feat(k8s): expose workclub services via LAN NodePorts
Expose frontend, API, and Keycloak on stable NodePorts and align app/keycloak external URLs for local-network browser access.
2026-03-13 06:33:50 +01:00
WorkClub Automation eaa163afa4 fix(k8s): stabilize keycloak rollout and align CD deploy manifests
Update Keycloak probe/realm import behavior and authority config so auth services start reliably on the dev cluster, while keeping CD deployment steps aligned with the actual Kubernetes overlay behavior.
2026-03-13 06:25:07 +01:00
WorkClub Automation 7272358746 fix(k8s): extreme probe timeouts for RPi and final Keycloak 26 admin fix
CI Pipeline / Backend Build & Test (push) Successful in 51s
CI Pipeline / Frontend Lint, Test & Build (push) Successful in 28s
CI Pipeline / Infrastructure Validation (push) Successful in 3s
2026-03-10 22:22:36 +01:00
WorkClub Automation 9b1ceb1fb4 fix(k8s): fix image names, keycloak 26 envs, and bump resource limits for RPi
CI Pipeline / Backend Build & Test (push) Successful in 52s
CI Pipeline / Frontend Lint, Test & Build (push) Successful in 42s
CI Pipeline / Infrastructure Validation (push) Successful in 5s
2026-03-10 22:16:31 +01:00
WorkClub Automation 90ae752652 fix(k8s): enable keycloak health endpoints and increase probe delays
CI Pipeline / Backend Build & Test (push) Successful in 1m2s
CI Pipeline / Frontend Lint, Test & Build (push) Successful in 29s
CI Pipeline / Infrastructure Validation (push) Successful in 3s
2026-03-10 22:07:02 +01:00
WorkClub Automation 3c41f0e40c fix(k8s): use args instead of command for keycloak to allow default entrypoint
CI Pipeline / Backend Build & Test (push) Successful in 1m19s
CI Pipeline / Frontend Lint, Test & Build (push) Successful in 26s
CI Pipeline / Infrastructure Validation (push) Successful in 4s
2026-03-10 22:02:48 +01:00
WorkClub Automation fce8b28114 fix(cd): force delete postgres statefulset to allow storage changes
CI Pipeline / Backend Build & Test (push) Successful in 57s
CI Pipeline / Frontend Lint, Test & Build (push) Successful in 34s
CI Pipeline / Infrastructure Validation (push) Successful in 5s
2026-03-10 21:54:26 +01:00
WorkClub Automation b204f6aa32 fix(k8s): register secrets and postgres-patch in dev kustomization
CI Pipeline / Frontend Lint, Test & Build (push) Has been cancelled
CI Pipeline / Infrastructure Validation (push) Has been cancelled
CI Pipeline / Backend Build & Test (push) Has been cancelled
2026-03-10 21:42:31 +01:00
WorkClub Automation 0a4d99b65b fix(k8s): add dev secrets and use emptyDir for postgres on storage-less cluster
CI Pipeline / Frontend Lint, Test & Build (push) Has been cancelled
CI Pipeline / Infrastructure Validation (push) Has been cancelled
CI Pipeline / Backend Build & Test (push) Has been cancelled
2026-03-10 21:18:19 +01:00
WorkClub Automation c9841d6cfc fix(cd): ensure workclub-dev namespace exists before deployment
CI Pipeline / Backend Build & Test (push) Successful in 59s
CI Pipeline / Frontend Lint, Test & Build (push) Successful in 26s
CI Pipeline / Infrastructure Validation (push) Successful in 4s
2026-03-10 20:40:29 +01:00
WorkClub Automation 641a6d0af0 fix(cd): use dynamic KUBECONFIG path and enhanced context diagnostics
CI Pipeline / Frontend Lint, Test & Build (push) Has been cancelled
CI Pipeline / Infrastructure Validation (push) Has been cancelled
CI Pipeline / Backend Build & Test (push) Has been cancelled
2026-03-10 20:38:21 +01:00
WorkClub Automation b1c351e936 fix(cd): use printf for robust KUBECONFIG writing and add diagnostics
CI Pipeline / Frontend Lint, Test & Build (push) Has been cancelled
CI Pipeline / Infrastructure Validation (push) Has been cancelled
CI Pipeline / Backend Build & Test (push) Has been cancelled
2026-03-10 20:35:12 +01:00
WorkClub Automation df625f3b3a Next try fixing the deployment pipeline
CI Pipeline / Frontend Lint, Test & Build (push) Has been cancelled
CI Pipeline / Infrastructure Validation (push) Has been cancelled
CI Pipeline / Backend Build & Test (push) Has been cancelled
2026-03-10 20:32:48 +01:00
WorkClub Automation b028c06636 Fix for Deployment, install kubectl
CI Pipeline / Frontend Lint, Test & Build (push) Has been cancelled
CI Pipeline / Infrastructure Validation (push) Has been cancelled
CI Pipeline / Backend Build & Test (push) Has been cancelled
2026-03-10 20:29:28 +01:00
WorkClub Automation 9f4bea36fe fix(cd): use robust manual kubectl setup to avoid base64 truncated input error
CI Pipeline / Backend Build & Test (push) Failing after 13s
CI Pipeline / Frontend Lint, Test & Build (push) Successful in 27s
CI Pipeline / Infrastructure Validation (push) Successful in 4s
2026-03-10 20:25:10 +01:00
WorkClub Automation c5b3fbe4cb Added Kubernetes Cluster Deployment
CI Pipeline / Backend Build & Test (push) Failing after 55s
CI Pipeline / Frontend Lint, Test & Build (push) Failing after 33s
CI Pipeline / Infrastructure Validation (push) Successful in 9s
2026-03-10 19:58:55 +01:00
WorkClub Automation 4f6d0ae6df chore: remove old screenshot images
CI Pipeline / Backend Build & Test (push) Successful in 1m1s
CI Pipeline / Frontend Lint, Test & Build (push) Successful in 29s
CI Pipeline / Infrastructure Validation (push) Successful in 4s
2026-03-09 17:31:51 +01:00
52 changed files with 2360 additions and 246 deletions
@@ -0,0 +1,97 @@
name: CD Deployment - Kubernetes
on:
workflow_run:
workflows: ["CD Bootstrap - Release Image Publish"]
types: [completed]
branches: [main, develop]
workflow_dispatch:
inputs:
image_tag:
description: 'Image tag to deploy (e.g., latest, dev)'
required: true
default: 'dev'
type: string
jobs:
deploy:
name: Deploy to Kubernetes
runs-on: ubuntu-latest
if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'workflow_dispatch' }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install kubectl
run: |
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
- name: Install Kustomize
run: |
curl -Lo kustomize.tar.gz https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv5.4.1/kustomize_v5.4.1_linux_amd64.tar.gz
tar -xzf kustomize.tar.gz
chmod +x kustomize
sudo mv kustomize /usr/local/bin/
- name: Set Image Tag
run: |
IMAGE_TAG="${{ github.event.inputs.image_tag }}"
if [[ -z "$IMAGE_TAG" ]]; then
IMAGE_TAG="dev" # Default for auto-trigger
fi
echo "IMAGE_TAG=$IMAGE_TAG" >> $GITHUB_ENV
- name: Kustomize Edit Image Tag
working-directory: ./infra/k8s/overlays/dev
run: |
kustomize edit set image workclub-api=192.168.241.13:8080/workclub-api:$IMAGE_TAG
kustomize edit set image workclub-frontend=192.168.241.13:8080/workclub-frontend:$IMAGE_TAG
- name: Deploy to Kubernetes
run: |
set -euo pipefail
export KUBECONFIG=$HOME/.kube/config
mkdir -p $HOME/.kube
if echo "${{ secrets.KUBECONFIG }}" | grep -q "apiVersion"; then
echo "Detected plain text KUBECONFIG"
printf '%s' "${{ secrets.KUBECONFIG }}" > $KUBECONFIG
else
echo "Detected base64 KUBECONFIG"
# Handle potential newlines/wrapping in the secret
printf '%s' "${{ secrets.KUBECONFIG }}" | base64 -d > $KUBECONFIG
fi
chmod 600 $KUBECONFIG
kubectl --kubeconfig="$KUBECONFIG" config view >/dev/null
# Diagnostics
echo "Kubeconfig path: $KUBECONFIG"
echo "Kubeconfig size: $(wc -c < $KUBECONFIG) bytes"
echo "Available contexts:"
kubectl --kubeconfig="$KUBECONFIG" config get-contexts
if ! grep -q "current-context" $KUBECONFIG; then
echo "Warning: current-context missing, attempting to fix..."
FIRST_CONTEXT=$(kubectl --kubeconfig="$KUBECONFIG" config get-contexts -o name | head -n 1)
if [ -n "$FIRST_CONTEXT" ]; then
kubectl --kubeconfig="$KUBECONFIG" config use-context "$FIRST_CONTEXT"
fi
fi
echo "Current context: $(kubectl --kubeconfig="$KUBECONFIG" config current-context)"
# Ensure target namespace exists
kubectl --kubeconfig="$KUBECONFIG" create namespace workclub-dev --dry-run=client -o yaml | kubectl --kubeconfig="$KUBECONFIG" apply -f -
# Apply manifests (non-destructive by default; avoid DB state churn)
kubectl --kubeconfig="$KUBECONFIG" config view --minify # Verification of context
kustomize build --load-restrictor LoadRestrictionsNone infra/k8s/overlays/dev | kubectl --kubeconfig="$KUBECONFIG" apply -f -
# Rollout verification
kubectl --kubeconfig="$KUBECONFIG" rollout status statefulset/workclub-postgres -n workclub-dev --timeout=300s
kubectl --kubeconfig="$KUBECONFIG" rollout status deployment/workclub-keycloak -n workclub-dev --timeout=600s
kubectl --kubeconfig="$KUBECONFIG" rollout status deployment/workclub-api -n workclub-dev --timeout=300s
kubectl --kubeconfig="$KUBECONFIG" rollout status deployment/workclub-frontend -n workclub-dev --timeout=300s
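The plaintext-vs-base64 detection in the deploy step above can be exercised locally with a small sketch. The function and file names here (`write_kubeconfig`, `/tmp/kubeconfig-demo`) are illustrative, not part of the workflow:

```shell
# Sketch of the KUBECONFIG secret handling from the deploy step,
# pulled into a function so it can be tested outside CI.
write_kubeconfig() {
  local secret="$1" out="$2"
  if printf '%s' "$secret" | grep -q "apiVersion"; then
    # Plain-text kubeconfig: write it through unchanged
    printf '%s' "$secret" > "$out"
  else
    # Assume base64; strip wrapping newlines before decoding
    printf '%s' "$secret" | tr -d '\n' | base64 -d > "$out"
  fi
  chmod 600 "$out"
}

# Demo call with a plain-text value
write_kubeconfig 'apiVersion: v1' /tmp/kubeconfig-demo
```

Keeping the branch on a literal `apiVersion` match, as the workflow does, is a heuristic: any base64 payload that happened to contain that substring would be misclassified, which is acceptable for a kubeconfig secret in practice.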
@@ -0,0 +1,149 @@
---
description: Implement tasks from an OpenSpec change (Experimental)
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name (e.g., `/opsx-apply add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt for available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx-apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using `/opsx-continue`
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
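The state handling in step 3 can be sketched as a small jq dispatch. The JSON shape used here (`state`, `progress.total`, `progress.complete`) is assumed from the field descriptions above, not taken from the CLI's actual output schema:

```shell
# Hypothetical dispatch on the `openspec instructions apply ... --json`
# output described in step 3. The sample JSON below is fabricated.
json='{"state":"in_progress","progress":{"total":7,"complete":4}}'
state=$(printf '%s' "$json" | jq -r '.state')
case "$state" in
  blocked)  echo "Missing artifacts - suggest /opsx-continue" ;;
  all_done) echo "All tasks complete - suggest archive" ;;
  *)        remaining=$(printf '%s' "$json" | jq '.progress.total - .progress.complete')
            echo "$remaining tasks remaining" ;;
esac
```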
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! You can archive this change with `/opsx-archive`.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly
@@ -0,0 +1,154 @@
---
description: Archive a completed change in the experimental workflow
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name after `/opsx-archive` (e.g., `/opsx-archive add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Prompt user for confirmation to continue
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Prompt user for confirmation to continue
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If user chooses sync, use Task tool (subagent_type: "general-purpose", prompt: "Use Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of choice.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Spec sync status (synced / sync skipped / no delta specs)
- Note about any warnings (incomplete artifacts/tasks)
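The archive move in step 5 amounts to a dated rename with a collision guard. A minimal sketch, run in a scratch directory so it is self-contained (the change name `add-auth` is hypothetical):

```shell
# Sketch of step 5: date-stamped archive with an existence check.
cd "$(mktemp -d)"
name="add-auth"
mkdir -p "openspec/changes/$name" openspec/changes/archive
target="openspec/changes/archive/$(date +%F)-$name"
if [ -e "$target" ]; then
  # Matches the "Archive Failed" output below: do not overwrite
  echo "Archive Failed: $target already exists" >&2
else
  mv "openspec/changes/$name" "$target"
  echo "Archived to $target/"
fi
```

Because the whole change directory is moved, `.openspec.yaml` and any delta specs travel with it, which is what the guardrails require.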
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs
All artifacts complete. All tasks complete.
```
**Output On Success (No Delta Specs)**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** No delta specs
All artifacts complete. All tasks complete.
```
**Output On Success With Warnings**
```
## Archive Complete (with warnings)
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** Sync skipped (user chose to skip)
**Warnings:**
- Archived with 2 incomplete artifacts
- Archived with 3 incomplete tasks
- Delta spec sync was skipped (user chose to skip)
Review the archive if this was not intentional.
```
**Output On Error (Archive Exists)**
```
## Archive Failed
**Change:** <change-name>
**Target:** openspec/changes/archive/YYYY-MM-DD-<name>/
Target archive directory already exists.
**Options:**
1. Rename the existing archive
2. Delete the existing archive if it's a duplicate
3. Wait until a different date to archive
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use the Skill tool to invoke `openspec-sync-specs` (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting
@@ -0,0 +1,170 @@
---
description: Enter explore mode - think through ideas, investigate problems, clarify requirements
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
**Input**: The argument after `/opsx-explore` is whatever the user wants to think about. Could be:
- A vague idea: "real-time collaboration"
- A specific problem: "the auth system is getting unwieldy"
- A change name: "add-dark-mode" (to explore in context of that change)
- A comparison: "postgres vs sqlite for this"
- Nothing (just enter explore mode)
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
If the user mentioned a specific change name, read its artifacts for context.
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create a proposal?"
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Ending Discovery
There's no required ending. Discovery might:
- **Flow into a proposal**: "Ready to start? I can create a change proposal."
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When things crystallize, you might offer a summary - but it's optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own
@@ -0,0 +1,103 @@
---
description: Propose a new change - create it and generate all artifacts in one step
---
Propose a new change - create the change and generate all artifacts in one step.
I'll create a change with artifacts:
- proposal.md (what & why)
- design.md (how)
- tasks.md (implementation steps)
When ready to implement, run /opsx-apply
---
**Input**: The argument after `/opsx-propose` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
5. **Show final status**
```bash
openspec status --change "<name>"
```
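The completion check in step 4b can be expressed as a single jq query. The JSON shape here (`applyRequires`, `artifacts[].id`, `artifacts[].status`) is assumed from the fields step 3 describes, not from the CLI's real schema, and the sample document is fabricated:

```shell
# Hypothetical check: is every artifact in applyRequires done?
status='{"applyRequires":["tasks"],"artifacts":[
  {"id":"proposal","status":"done"},
  {"id":"tasks","status":"done"}]}'
ready=$(printf '%s' "$status" | jq '
  [.applyRequires[] as $id
   | (.artifacts[] | select(.id == $id) | .status == "done")]
  | all')
echo "apply-ready: $ready"
```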
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx-apply` to start implementing."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to next
@@ -0,0 +1,156 @@
---
name: openspec-apply-change
description: Implement tasks from an OpenSpec change. Use when the user wants to start implementing, continue implementation, or work through tasks.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.2.0"
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt for available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx-apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema - could be proposal/specs/design/tasks or spec/tests/implementation/docs)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using openspec-continue-change
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
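The checkbox flip can be a one-line edit of the tasks file, for example (a sketch assuming GNU sed and a `tasks.md` in the working directory; the task lines are placeholders):
```bash
# Sketch: mark a finished task complete in tasks.md (GNU sed; path assumed).
tasks_file="tasks.md"
printf -- '- [ ] 1. Add login route\n- [ ] 2. Wire session\n' > "$tasks_file"
# Flip the checkbox only on the line for task 1.
sed -i 's/^- \[ \] 1\./- [x] 1./' "$tasks_file"
cat "$tasks_file"
```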
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! Ready to archive this change.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly
---
name: openspec-archive-change
description: Archive a completed change in the experimental workflow. Use when the user wants to finalize and archive a change after implementation is complete.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.2.0"
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name. If omitted, check whether it can be inferred from conversation context. If the reference is vague or ambiguous, you MUST prompt with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
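The count can be done with `grep -c` against the checkbox markers (a sketch run against a throwaway `tasks.md`; the task lines are placeholders):
```bash
# Sketch of the step-3 count (tasks file path assumed to be tasks.md).
printf -- '- [x] 1. Done thing\n- [ ] 2. Open thing\n- [ ] 3. Another\n' > tasks.md
incomplete=$(grep -c '^- \[ \]' tasks.md)
complete=$(grep -c '^- \[x\]' tasks.md)
echo "incomplete=$incomplete complete=$complete"
```
Note that `grep -c` exits nonzero when the count is zero, so guard it accordingly under `set -e`.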
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If the user chooses sync, use the Task tool (subagent_type: "general-purpose", prompt: "Use Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Afterward, proceed to archive unless the user chose Cancel.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
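The steps above can be combined into one guarded sequence (a sketch; the change name is a stand-in, and the change directory is created here only to make the example self-contained):
```bash
# Sketch of step 5 with a collision guard. "add-user-auth" is a stand-in name.
name="add-user-auth"
mkdir -p "openspec/changes/$name"        # stand-in for an existing change dir
target="openspec/changes/archive/$(date +%F)-$name"
mkdir -p openspec/changes/archive
if [ -e "$target" ]; then
  # Fail instead of overwriting; suggest renaming or using a different date.
  echo "Archive target already exists: $target" >&2
else
  mv "openspec/changes/$name" "$target"
  echo "Archived to $target"
fi
```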
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Whether specs were synced (if applicable)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs (or "No delta specs" or "Sync skipped")
All artifacts complete. All tasks complete.
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use openspec-sync-specs approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting
---
name: openspec-explore
description: Enter explore mode - a thinking partner for exploring ideas, investigating problems, and clarifying requirements. Use when the user wants to think through something before or during a change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.2.0"
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create a proposal?"
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Handling Different Entry Points
**User brings a vague idea:**
```
User: I'm thinking about adding real-time collaboration
You: Real-time collab is a big space. Let me think about this...
COLLABORATION SPECTRUM
════════════════════════════════════════════
Awareness Coordination Sync
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Presence│ │Cursors │ │ CRDT │
│ "3 │ │ Multi │ │Conflict│
│online" │ │ select │ │ free │
└────────┘ └────────┘ └────────┘
│ │ │
trivial moderate complex
Where's your head at?
```
**User brings a specific problem:**
```
User: The auth system is a mess
You: [reads codebase]
┌─────────────────────────────────────────────┐
│ CURRENT AUTH FLOW │
└─────────────────────────────────────────────┘
┌─────────────┼─────────────┐
▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐
│ Google │ │ GitHub │ │ Email │
│ OAuth │ │ OAuth │ │ Magic │
└────┬────┘ └────┬────┘ └────┬────┘
│ │ │
└─────────────┼─────────────┘
┌───────────┐
│ Session │
└─────┬─────┘
┌───────────┐
│ Perms │
└───────────┘
I see three tangles. Which one's burning?
```
**User is stuck mid-implementation:**
```
User: /opsx-explore add-auth-system
The OAuth integration is more complex than expected
You: [reads change artifacts]
You're on task 4: "Implement OAuth flow"
Let me trace what's involved...
[draws diagram, explores options, suggests paths]
Want to update the design to reflect this?
Or add a spike task to investigate?
```
**User wants to compare options:**
```
User: Should we use Postgres or SQLite?
You: Generic answer is boring. What's the context?
User: A CLI tool that tracks local dev environments
You: That changes everything.
┌─────────────────────────────────────────────────┐
│ CLI TOOL DATA STORAGE │
└─────────────────────────────────────────────────┘
Key constraints:
• No daemon running
• Must work offline
• Single user
SQLite Postgres
Deployment embedded ✓ needs server ✗
Offline yes ✓ no ✗
Single file yes ✓ no ✗
SQLite. Not even close.
Unless... is there a sync component?
```
---
## Ending Exploration
There's no required ending. An exploration might:
- **Flow into a proposal**: "Ready to start? I can create a change proposal."
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When it feels like things are crystallizing, you might summarize:
```
## What We Figured Out
**The problem**: [crystallized understanding]
**The approach**: [if one emerged]
**Open questions**: [if any remain]
**Next steps** (if ready):
- Create a change proposal
- Keep exploring: just keep talking
```
But this summary is optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Exploration is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own
---
name: openspec-propose
description: Propose a new change with all artifacts generated in one step. Use when the user wants to quickly describe what they want to build and get a complete proposal with design, specs, and tasks ready for implementation.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.2.0"
---
Propose a new change - create the change and generate all artifacts in one step.
I'll create a change with artifacts:
- proposal.md (what & why)
- design.md (how)
- tasks.md (implementation steps)
When ready to implement, run /opsx-apply
---
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
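A rough way to check the apply-ready condition in step 4b from a captured status payload (illustrative only; the JSON shape is an assumption based on the fields described above, and a JSON-aware tool would be more robust than `grep`):
```bash
# Illustrative check: is every applyRequires artifact marked done?
# JSON shape assumed; shown for a schema where applyRequires is ["tasks"].
status='{"applyRequires":["tasks"],"artifacts":[{"id":"tasks","status":"done"}]}'
if printf '%s' "$status" | grep -q '"id":"tasks","status":"done"'; then
  ready="yes"
else
  ready="no"
fi
echo "apply-ready=$ready"
```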
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx-apply` or ask me to implement to start working on the tasks."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to next
@@ -12,6 +12,7 @@
> - Docker Compose for local development (hot reload, Keycloak, PostgreSQL)
> - Kubernetes manifests (Kustomize base + dev overlay)
> - Gitea CI pipeline (`.gitea/workflows/ci.yml`) for backend/frontend/infrastructure validation
> - Gitea CD bootstrap + deployment pipelines (`.gitea/workflows/cd-bootstrap.yml`, `.gitea/workflows/cd-deploy.yml`)
> - Comprehensive TDD test suite (xUnit + Testcontainers, Vitest + RTL, Playwright E2E)
> - Seed data for development (2 clubs, 5 users, sample tasks + shifts)
>
@@ -36,7 +37,7 @@ Build a multi-tenant internet application for managing work items over several m
- **Testing**: TDD approach (tests first).
- **Notifications**: None for MVP.
- **CI extension**: Add Gitea-hosted CI pipeline for this repository.
- **Pipeline scope (updated)**: CI + CD. CI handles build/test/lint/manifest validation; CD bootstrap publishes multi-arch images; CD deploy applies Kubernetes manifests.

**Research Findings**:
- **Finbuckle.MultiTenant**: ClaimStrategy + HeaderStrategy fallback is production-proven (fullstackhero/dotnet-starter-kit pattern).
@@ -73,6 +74,8 @@ Deliver a working multi-tenant club work management application where authentica
- `/docker-compose.yml` — Local dev stack (PostgreSQL, Keycloak, .NET API, Next.js)
- `/infra/k8s/` — Kustomize manifests (base + dev overlay)
- `/.gitea/workflows/ci.yml` — Gitea Actions CI pipeline (parallel backend/frontend/infra checks)
- `/.gitea/workflows/cd-bootstrap.yml` — Gitea Actions CD bootstrap workflow (manual multi-arch image publish)
- `/.gitea/workflows/cd-deploy.yml` — Gitea Actions CD deployment workflow (Kubernetes deploy with Kustomize overlay)
- PostgreSQL database with RLS policies on all tenant-scoped tables
- Keycloak realm configuration with test users and club memberships
- Seed data for development
@@ -106,6 +109,8 @@ Deliver a working multi-tenant club work management application where authentica
- TDD: all backend features have tests BEFORE implementation
- Gitea-hosted CI pipeline for this repository (`code.hal9000.damnserver.com/MasterMito/work-club-manager`)
- CI jobs run in parallel (backend, frontend, infrastructure validation)
- Gitea-hosted CD bootstrap workflow for private registry image publication (`workclub-api`, `workclub-frontend`)
- Gitea-hosted CD deployment workflow for Kubernetes dev namespace rollout (`workclub-dev`)

### Must NOT Have (Guardrails)

- **No CQRS/MediatR** — Direct service injection from controllers/endpoints
@@ -124,7 +129,7 @@ Deliver a working multi-tenant club work management application where authentica
- **No in-memory database for tests** — Real PostgreSQL via Testcontainers
- **No billing, subscriptions, or analytics dashboard**
- **No mobile app**
- **No single-step build-and-deploy coupling** — keep image bootstrap and cluster deployment as separate workflows

---
@@ -193,11 +198,11 @@ Wave 5 (After Wave 4 — polish + Docker):
├── Task 24: Frontend Dockerfiles (dev + prod standalone) (depends: 18) [quick]
└── Task 25: Kustomize dev overlay + resource limits + health checks (depends: 6, 23, 24) [unspecified-high]

Wave 6 (After Wave 5 — E2E + CI/CD integration):
├── Task 26: Playwright E2E tests — auth flow + club switching (depends: 21, 22) [unspecified-high]
├── Task 27: Playwright E2E tests — task management flow (depends: 19, 22) [unspecified-high]
├── Task 28: Playwright E2E tests — shift sign-up flow (depends: 20, 22) [unspecified-high]
└── Task 29: Gitea CI/CD workflows (CI checks + image bootstrap + Kubernetes deploy) (depends: 12, 17, 23, 24, 25) [unspecified-high]

Wave FINAL (After ALL tasks — independent review, 4 parallel):
├── Task F1: Plan compliance audit (oracle)
@@ -2525,34 +2530,37 @@ Max Concurrent: 6 (Wave 1)
- Files: `frontend/tests/e2e/shifts.spec.ts`
- Pre-commit: `bunx playwright test tests/e2e/shifts.spec.ts`

- [x] 29. Gitea CI/CD Pipelines — CI Validation + Image Bootstrap + Kubernetes Deploy

**What to do**:
- Maintain `.gitea/workflows/ci.yml` for repository `code.hal9000.damnserver.com/MasterMito/work-club-manager`
- Maintain `.gitea/workflows/cd-bootstrap.yml` for manual multi-arch image publishing to private registry
- Maintain `.gitea/workflows/cd-deploy.yml` for Kubernetes deployment using Kustomize overlays
- Configure CI triggers:
  - `push` on `main` and feature branches
  - `pull_request` targeting `main`
  - `workflow_dispatch` for manual reruns
- CI workflow structure (parallel validation jobs):
  - `backend-ci`: setup .NET 10 SDK, restore, build, run backend unit/integration tests
  - `frontend-ci`: setup Bun, install deps, run lint, type-check, unit tests, production build
  - `infra-ci`: validate Docker Compose and Kustomize manifests
- CD bootstrap workflow behavior:
  - Manual trigger with `image_tag` + build flags
  - Buildx multi-arch build (`linux/amd64,linux/arm64`) for `workclub-api` and `workclub-frontend`
  - Push image tags to `192.168.241.13:8080` and emit task-31/task-32/task-33 evidence artifacts
- CD deploy workflow behavior:
  - Triggered by successful bootstrap (`workflow_run`) or manual dispatch (`image_tag` input)
  - Install kubectl + kustomize on runner
  - Run `kustomize edit set image` in `infra/k8s/overlays/dev`
  - Apply manifests with `kubectl apply -k infra/k8s/overlays/dev`
  - Ensure namespace `workclub-dev` exists and perform deployment diagnostics
- Enforce branch protection expectation in plan notes:
  - Required checks: `backend-ci`, `frontend-ci`, `infra-ci`

**Must NOT do**:
- Do NOT collapse bootstrap and deployment into one opaque pipeline stage
- Do NOT bypass image-tag pinning in deployment
- Do NOT remove CI validation gates (`backend-ci`, `frontend-ci`, `infra-ci`)

**Recommended Agent Profile**:
- **Category**: `unspecified-high`
@@ -2568,10 +2576,13 @@ Max Concurrent: 6 (Wave 1)
**References**:

**Pattern References**:
- `.gitea/workflows/ci.yml` — Source of truth for CI checks
- `.gitea/workflows/cd-bootstrap.yml` — Source of truth for image publish bootstrap
- `.gitea/workflows/cd-deploy.yml` — Source of truth for deployment apply logic
- `docker-compose.yml` — Source of truth for `docker compose config` validation
- `infra/k8s/base/kustomization.yaml` and `infra/k8s/overlays/dev/kustomization.yaml` — Kustomize build/apply inputs
- `backend/WorkClub.sln` — Backend restore/build/test entrypoint for .NET job
- `frontend/package.json` + `frontend/bun.lock` — Frontend scripts and cache key anchor

**External References**:
- Gitea Actions docs: workflow syntax and trigger model (`.gitea/workflows/*.yml`)
@@ -2596,6 +2607,18 @@ Max Concurrent: 6 (Wave 1)
Failure Indicators: Missing job, skipped required job, or non-success conclusion
Evidence: .sisyphus/evidence/task-29-gitea-ci-success.json

Scenario: CD bootstrap and deploy workflows are present and wired
Tool: Bash
Preconditions: Repository contains workflow files
Steps:
1. Assert `.gitea/workflows/cd-bootstrap.yml` exists
2. Assert `.gitea/workflows/cd-deploy.yml` exists
3. Grep bootstrap workflow for buildx multi-arch publish step
4. Grep deploy workflow for `workflow_run`, `kustomize edit set image`, and `kubectl apply -k`
Expected Result: Both CD workflows exist with expected bootstrap and deploy steps
Failure Indicators: Missing file, missing trigger, or missing deploy commands
Evidence: .sisyphus/evidence/task-29-gitea-cd-workflows.txt

Scenario: Pipeline fails on intentional backend break
Tool: Bash (git + Gitea API)
Preconditions: Temporary branch available, ability to push test commit
@@ -2611,8 +2634,8 @@ Max Concurrent: 6 (Wave 1)
```

**Commit**: YES
- Message: `ci(cd): add CI validation plus bootstrap and Kubernetes deployment workflows`
- Files: `.gitea/workflows/ci.yml`, `.gitea/workflows/cd-bootstrap.yml`, `.gitea/workflows/cd-deploy.yml`
- Pre-commit: `docker compose config && kustomize build infra/k8s/overlays/dev > /dev/null`
---
@@ -2662,7 +2685,7 @@ Max Concurrent: 6 (Wave 1)
| 4 | T18-T21 | `feat(ui): add layout, club-switcher, login, task and shift pages` | frontend/src/app/**/*.tsx, frontend/src/components/**/*.tsx | `bun run build && bun run test` |
| 5 | T22-T25 | `infra(deploy): add full Docker Compose stack, Dockerfiles, and Kustomize dev overlay` | docker-compose.yml, **/Dockerfile*, infra/k8s/overlays/dev/**/*.yaml | `docker compose config && kustomize build infra/k8s/overlays/dev` |
| 6 | T26-T28 | `test(e2e): add Playwright E2E tests for auth, tasks, and shifts` | frontend/tests/e2e/**/*.spec.ts | `bunx playwright test` |
| 6 | T29 | `ci(cd): add CI validation plus bootstrap and Kubernetes deployment workflows` | .gitea/workflows/ci.yml, .gitea/workflows/cd-bootstrap.yml, .gitea/workflows/cd-deploy.yml | `docker compose config && kustomize build infra/k8s/overlays/dev > /dev/null` |

---
@@ -2699,6 +2722,12 @@ kustomize build infra/k8s/overlays/dev > /dev/null # Expected: Exit 0
# CI workflow file present and includes required jobs
grep -E "backend-ci|frontend-ci|infra-ci" .gitea/workflows/ci.yml # Expected: all 3 job names present
# CD bootstrap workflow present with multi-arch publish
grep -E "buildx|linux/amd64,linux/arm64|workclub-api|workclub-frontend" .gitea/workflows/cd-bootstrap.yml
# CD deploy workflow present with deploy trigger and apply step
grep -E "workflow_run|kustomize edit set image|kubectl apply -k" .gitea/workflows/cd-deploy.yml
```
### Final Checklist
@@ -2710,6 +2739,7 @@ grep -E "backend-ci|frontend-ci|infra-ci" .gitea/workflows/ci.yml # Expected: a
- [x] Docker Compose stack starts clean and healthy
- [x] Kustomize manifests build without errors
- [x] Gitea CI workflow exists and references backend-ci/frontend-ci/infra-ci
- [x] Gitea CD bootstrap and deploy workflows exist and are wired to image publish/deploy steps
- [x] RLS isolation proven at database level
- [x] Cross-tenant access returns 403
- [x] Task state machine rejects invalid transitions (422)
@@ -85,7 +85,7 @@ public class ClubRoleClaimsTransformation : IClaimsTransformation
{
    return clubRole switch
    {
-       ClubRole.Admin => "Admin",
        ClubRole.Manager => "Manager",
        ClubRole.Member => "Member",
        ClubRole.Viewer => "Viewer",
@@ -0,0 +1,67 @@
using Microsoft.AspNetCore.Http.HttpResults;
using Microsoft.AspNetCore.Mvc;
using WorkClub.Api.Services;
using WorkClub.Application.Clubs.DTOs;

namespace WorkClub.Api.Endpoints.Clubs;

public static class AdminClubEndpoints
{
    public static void MapAdminClubEndpoints(this IEndpointRouteBuilder app)
    {
        var group = app.MapGroup("/api/admin/clubs")
            .RequireAuthorization("RequireGlobalAdmin")
            .WithTags("AdminClubs");

        group.MapGet("", GetClubs)
            .WithName("AdminGetClubs");
        group.MapPost("", CreateClub)
            .WithName("AdminCreateClub");
        group.MapPut("{id:guid}", UpdateClub)
            .WithName("AdminUpdateClub");
        group.MapDelete("{id:guid}", DeleteClub)
            .WithName("AdminDeleteClub");
    }

    private static async Task<Ok<List<ClubDetailDto>>> GetClubs(AdminClubService adminClubService)
    {
        var result = await adminClubService.GetAllClubsAsync();
        return TypedResults.Ok(result);
    }

    private static async Task<Created<ClubDetailDto>> CreateClub(
        [FromBody] CreateClubRequest request,
        AdminClubService adminClubService)
    {
        var result = await adminClubService.CreateClubAsync(request);
        return TypedResults.Created($"/api/admin/clubs/{result.Id}", result);
    }

    private static async Task<Results<Ok<ClubDetailDto>, NotFound>> UpdateClub(
        Guid id,
        [FromBody] UpdateClubRequest request,
        AdminClubService adminClubService)
    {
        var (result, error) = await adminClubService.UpdateClubAsync(id, request);
        if (error != null)
            return TypedResults.NotFound();
        return TypedResults.Ok(result!);
    }

    private static async Task<Results<NoContent, NotFound>> DeleteClub(
        Guid id,
        AdminClubService adminClubService)
    {
        var success = await adminClubService.DeleteClubAsync(id);
        if (!success)
            return TypedResults.NotFound();
        return TypedResults.NoContent();
    }
}
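The file above maps a standard CRUD surface under `/api/admin/clubs`, guarded by the `RequireGlobalAdmin` policy. A minimal TypeScript client sketch for exercising those routes follows; the `ClubDetail` field names and the `Fetch` shape are illustrative assumptions, not types taken from this repository's frontend:

```typescript
// Hypothetical client for the /api/admin/clubs endpoints above.
// Field names mirror ClubDetailDto but are assumptions for illustration.
interface ClubDetail {
  id: string;
  name: string;
  sportType: string;
  description?: string | null;
}

// Narrow fetch shape so the client can be unit-tested with a stub.
type Fetch = (
  url: string,
  init?: { method?: string; headers?: Record<string, string>; body?: string }
) => Promise<{ ok: boolean; status: number; json(): Promise<unknown> }>;

class AdminClubClient {
  constructor(private baseUrl: string, private token: string, private fetchFn: Fetch) {}

  private headers(): Record<string, string> {
    return { Authorization: `Bearer ${this.token}`, 'Content-Type': 'application/json' };
  }

  // GET /api/admin/clubs -> 200 with the full club list
  async list(): Promise<ClubDetail[]> {
    const res = await this.fetchFn(`${this.baseUrl}/api/admin/clubs`, { headers: this.headers() });
    if (!res.ok) throw new Error(`list failed: ${res.status}`);
    return (await res.json()) as ClubDetail[];
  }

  // DELETE /api/admin/clubs/{id} -> 204 on success, 404 when missing
  async remove(id: string): Promise<boolean> {
    const res = await this.fetchFn(`${this.baseUrl}/api/admin/clubs/${id}`, {
      method: 'DELETE',
      headers: this.headers(),
    });
    return res.status === 204;
  }
}
```

Injecting the fetch function keeps the client trivially testable without a running API.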
@@ -28,7 +28,7 @@ public static class ShiftEndpoints
    .WithName("UpdateShift");
group.MapDelete("{id:guid}", DeleteShift)
-   .RequireAuthorization("RequireAdmin")
+   .RequireAuthorization("RequireManager")
    .WithName("DeleteShift");
group.MapPost("{id:guid}/signup", SignUpForShift)
@@ -28,7 +28,7 @@ public static class TaskEndpoints
    .WithName("UpdateTask");
group.MapDelete("{id:guid}", DeleteTask)
-   .RequireAuthorization("RequireAdmin")
+   .RequireAuthorization("RequireManager")
    .WithName("DeleteTask");
group.MapPost("{id:guid}/assign", AssignTaskToMe)
@@ -22,8 +22,9 @@ public class TenantValidationMiddleware
    return;
}

-// Exempt /api/clubs/me from tenant validation - this is the bootstrap endpoint
+// Exempt bootstrap and admin endpoints from tenant validation
-if (context.Request.Path.StartsWithSegments("/api/clubs/me"))
+if (context.Request.Path.StartsWithSegments("/api/clubs/me") ||
+    context.Request.Path.StartsWithSegments("/api/admin"))
{
    _logger.LogInformation("TenantValidationMiddleware: Exempting {Path} from tenant validation", context.Request.Path);
    await _next(context);
@@ -24,6 +24,7 @@ builder.Services.AddScoped<SeedDataService>();
builder.Services.AddScoped<TaskService>();
builder.Services.AddScoped<ShiftService>();
builder.Services.AddScoped<ClubService>();
+builder.Services.AddScoped<AdminClubService>();
builder.Services.AddScoped<MemberService>();
builder.Services.AddScoped<MemberSyncService>();
@@ -49,9 +50,13 @@ builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
builder.Services.AddScoped<IClaimsTransformation, ClubRoleClaimsTransformation>();

builder.Services.AddAuthorizationBuilder()
-    .AddPolicy("RequireAdmin", policy => policy.RequireRole("Admin"))
-    .AddPolicy("RequireManager", policy => policy.RequireRole("Admin", "Manager"))
-    .AddPolicy("RequireMember", policy => policy.RequireRole("Admin", "Manager", "Member"))
+    .AddPolicy("RequireGlobalAdmin", policy => policy.RequireAssertion(context =>
+    {
+        var realmAccess = context.User.FindFirst("realm_access")?.Value;
+        return realmAccess != null && realmAccess.Contains("\"admin\"");
+    }))
+    .AddPolicy("RequireManager", policy => policy.RequireRole("Manager"))
+    .AddPolicy("RequireMember", policy => policy.RequireRole("Manager", "Member"))
    .AddPolicy("RequireViewer", policy => policy.RequireAuthenticatedUser());

builder.Services.AddDbContext<AppDbContext>((sp, options) =>
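The new `RequireGlobalAdmin` assertion substring-matches `"admin"` against the raw `realm_access` claim JSON, which would also match the quoted token anywhere else in that JSON (for example in a non-role string value). A structured parse of the claim avoids that; here is the same check sketched in TypeScript purely for illustration:

```typescript
// Structured version of the RequireGlobalAdmin check: parse the
// realm_access claim JSON and inspect the roles array, instead of
// substring-matching the raw claim value.
interface RealmAccess {
  roles?: string[];
}

function isGlobalAdmin(realmAccessClaim: string | null | undefined): boolean {
  if (!realmAccessClaim) return false;
  try {
    const parsed = JSON.parse(realmAccessClaim) as RealmAccess;
    return Array.isArray(parsed.roles) && parsed.roles.includes('admin');
  } catch {
    // A malformed claim never grants access.
    return false;
  }
}
```

The equivalent C# version would deserialize the claim and check the `roles` array the same way; the substring form in the diff works for well-formed Keycloak tokens but is looser than it needs to be.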
@@ -122,6 +127,7 @@ app.MapGet("/api/test", () => Results.Ok(new { message = "Test endpoint" }))
app.MapTaskEndpoints();
app.MapShiftEndpoints();
app.MapClubEndpoints();
+app.MapAdminClubEndpoints();
app.MapMemberEndpoints();

app.Run();
@@ -0,0 +1,113 @@
using Microsoft.EntityFrameworkCore;
using Npgsql;
using WorkClub.Application.Clubs.DTOs;
using WorkClub.Domain.Entities;
using WorkClub.Infrastructure.Data;

namespace WorkClub.Api.Services;

public class AdminClubService
{
    private readonly AppDbContext _context;

    public AdminClubService(AppDbContext context)
    {
        _context = context;
    }

    public async Task<List<ClubDetailDto>> GetAllClubsAsync()
    {
        var strategy = _context.Database.CreateExecutionStrategy();
        return await strategy.ExecuteAsync(async () =>
        {
            await using var transaction = await _context.Database.BeginTransactionAsync();
            await _context.Database.ExecuteSqlRawAsync("SET LOCAL ROLE app_admin");
            var clubs = await _context.Clubs.ToListAsync();
            await _context.Database.ExecuteSqlRawAsync("RESET ROLE");
            await transaction.CommitAsync();
            return clubs.Select(c => new ClubDetailDto(
                c.Id, c.Name, c.SportType.ToString(), c.Description, c.CreatedAt, c.UpdatedAt)).ToList();
        });
    }

    public async Task<ClubDetailDto> CreateClubAsync(CreateClubRequest request)
    {
        var tenantId = Guid.NewGuid().ToString();
        var club = new Club
        {
            Id = Guid.NewGuid(),
            TenantId = tenantId,
            Name = request.Name,
            SportType = request.SportType,
            Description = request.Description,
            CreatedAt = DateTimeOffset.UtcNow,
            UpdatedAt = DateTimeOffset.UtcNow
        };

        var strategy = _context.Database.CreateExecutionStrategy();
        await strategy.ExecuteAsync(async () =>
        {
            await using var transaction = await _context.Database.BeginTransactionAsync();
            await _context.Database.ExecuteSqlRawAsync("SET LOCAL ROLE app_admin");
            _context.Clubs.Add(club);
            await _context.SaveChangesAsync();
            await _context.Database.ExecuteSqlRawAsync("RESET ROLE");
            await transaction.CommitAsync();
        });

        return new ClubDetailDto(club.Id, club.Name, club.SportType.ToString(), club.Description, club.CreatedAt, club.UpdatedAt);
    }

    public async Task<(ClubDetailDto? club, string? error)> UpdateClubAsync(Guid id, UpdateClubRequest request)
    {
        var strategy = _context.Database.CreateExecutionStrategy();
        return await strategy.ExecuteAsync<(ClubDetailDto? club, string? error)>(async () =>
        {
            await using var transaction = await _context.Database.BeginTransactionAsync();
            await _context.Database.ExecuteSqlRawAsync("SET LOCAL ROLE app_admin");
            var club = await _context.Clubs.FindAsync(id);
            if (club == null)
            {
                await _context.Database.ExecuteSqlRawAsync("RESET ROLE");
                return (null, "Club not found");
            }
            club.Name = request.Name;
            club.SportType = request.SportType;
            club.Description = request.Description;
            club.UpdatedAt = DateTimeOffset.UtcNow;
            await _context.SaveChangesAsync();
            await _context.Database.ExecuteSqlRawAsync("RESET ROLE");
            await transaction.CommitAsync();
            return (new ClubDetailDto(club.Id, club.Name, club.SportType.ToString(), club.Description, club.CreatedAt, club.UpdatedAt), null);
        });
    }

    public async Task<bool> DeleteClubAsync(Guid id)
    {
        var strategy = _context.Database.CreateExecutionStrategy();
        return await strategy.ExecuteAsync<bool>(async () =>
        {
            await using var transaction = await _context.Database.BeginTransactionAsync();
            await _context.Database.ExecuteSqlRawAsync("SET LOCAL ROLE app_admin");
            var club = await _context.Clubs.FindAsync(id);
            if (club == null)
            {
                await _context.Database.ExecuteSqlRawAsync("RESET ROLE");
                return false;
            }
            _context.Clubs.Remove(club);
            await _context.SaveChangesAsync();
            await _context.Database.ExecuteSqlRawAsync("RESET ROLE");
            await transaction.CommitAsync();
            return true;
        });
    }
}
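Every method in `AdminClubService` follows the same shape: begin a transaction, `SET LOCAL ROLE app_admin` to bypass the tenant RLS policies, do the work, `RESET ROLE`, commit. Since `SET LOCAL` is transaction-scoped in PostgreSQL, the role also reverts automatically at COMMIT or ROLLBACK; the explicit `RESET ROLE` mirrors the C# code. The pattern can be sketched generically (the `Queryable` interface is an assumption for illustration, not a real driver type):

```typescript
// Minimal query interface so the wrapper can run against any driver or stub.
interface Queryable {
  query(sql: string): Promise<unknown>;
}

// Run `work` inside a transaction with the RLS-bypassing admin role.
async function withAdminRole<T>(db: Queryable, work: () => Promise<T>): Promise<T> {
  await db.query('BEGIN');
  try {
    // SET LOCAL only lasts until the end of this transaction.
    await db.query('SET LOCAL ROLE app_admin');
    const result = await work();
    await db.query('RESET ROLE');
    await db.query('COMMIT');
    return result;
  } catch (err) {
    // ROLLBACK ends the transaction, which also discards the SET LOCAL role.
    await db.query('ROLLBACK');
    throw err;
  }
}
```

Centralizing the sequence in one helper avoids the repeated boilerplate visible in the four methods above and guarantees the role cannot leak past the transaction.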
@@ -60,7 +60,6 @@ public class MemberSyncService
var roleClaim = httpContext.User.FindFirst(System.Security.Claims.ClaimTypes.Role)?.Value ?? "Member";
var clubRole = roleClaim.ToLowerInvariant() switch
{
-   "admin" => ClubRole.Admin,
    "manager" => ClubRole.Manager,
    "member" => ClubRole.Member,
    "viewer" => ClubRole.Viewer,
@@ -0,0 +1,9 @@
using WorkClub.Domain.Enums;

namespace WorkClub.Application.Clubs.DTOs;

public record CreateClubRequest(
    string Name,
    SportType SportType,
    string? Description
);
@@ -0,0 +1,9 @@
using WorkClub.Domain.Enums;

namespace WorkClub.Application.Clubs.DTOs;

public record UpdateClubRequest(
    string Name,
    SportType SportType,
    string? Description
);
@@ -2,7 +2,6 @@ namespace WorkClub.Domain.Enums;
public enum ClubRole
{
-   Admin = 0,
    Manager = 1,
    Member = 2,
    Viewer = 3
@@ -26,7 +26,7 @@ public class SeedDataService
using var transaction = await context.Database.BeginTransactionAsync();

-// Enable RLS on all tenant tables
+// Enable RLS on all tenant tables (Must be table owner, which 'workclub' is)
await context.Database.ExecuteSqlRawAsync(@"
    ALTER TABLE clubs ENABLE ROW LEVEL SECURITY;
    ALTER TABLE clubs FORCE ROW LEVEL SECURITY;
@@ -124,31 +124,7 @@ public class SeedDataService
{
    var members = new List<Member>
    {
-       // admin@test.com: Admin in Club 1, Member in Club 2
-       new Member
-       {
-           Id = Guid.NewGuid(),
-           TenantId = tennisClub.TenantId,
-           ExternalUserId = "admin-user-id",
-           DisplayName = "Admin User",
-           Email = "admin@test.com",
-           Role = ClubRole.Admin,
-           ClubId = tennisClub.Id,
-           CreatedAt = DateTimeOffset.UtcNow,
-           UpdatedAt = DateTimeOffset.UtcNow
-       },
-       new Member
-       {
-           Id = Guid.NewGuid(),
-           TenantId = cyclingClub.TenantId,
-           ExternalUserId = "admin-user-id",
-           DisplayName = "Admin User",
-           Email = "admin@test.com",
-           Role = ClubRole.Member,
-           ClubId = cyclingClub.Id,
-           CreatedAt = DateTimeOffset.UtcNow,
-           UpdatedAt = DateTimeOffset.UtcNow
-       },
        // manager@test.com: Manager in Club 1
        new Member
        {
@@ -219,8 +195,7 @@ public class SeedDataService
    await context.SaveChangesAsync();
}

-// Get admin member IDs for work item creation
-var adminMembers = context.Members.Where(m => m.Email == "admin@test.com").ToList();
var managerMember = context.Members.First(m => m.Email == "manager@test.com");
var member1Members = context.Members.Where(m => m.Email == "member1@test.com").ToList();
var member2Member = context.Members.First(m => m.Email == "member2@test.com");
@@ -239,7 +214,7 @@ public class SeedDataService
Description = "Resurface main court",
Status = WorkItemStatus.Open,
AssigneeId = null,
-CreatedById = adminMembers.First(m => m.ClubId == tennisClub.Id).Id,
+CreatedById = managerMember.Id,
ClubId = tennisClub.Id,
DueDate = DateTimeOffset.UtcNow.AddDays(14),
CreatedAt = DateTimeOffset.UtcNow,
@@ -253,7 +228,7 @@ public class SeedDataService
Description = "Purchase new tennis rackets and balls",
Status = WorkItemStatus.Assigned,
AssigneeId = managerMember.Id,
-CreatedById = adminMembers.First(m => m.ClubId == tennisClub.Id).Id,
+CreatedById = managerMember.Id,
ClubId = tennisClub.Id,
DueDate = DateTimeOffset.UtcNow.AddDays(7),
CreatedAt = DateTimeOffset.UtcNow,
@@ -267,7 +242,7 @@ public class SeedDataService
Description = "Organize annual summer tournament",
Status = WorkItemStatus.InProgress,
AssigneeId = member1Members.First(m => m.ClubId == tennisClub.Id).Id,
-CreatedById = adminMembers.First(m => m.ClubId == tennisClub.Id).Id,
+CreatedById = managerMember.Id,
ClubId = tennisClub.Id,
DueDate = DateTimeOffset.UtcNow.AddDays(30),
CreatedAt = DateTimeOffset.UtcNow,
@@ -281,7 +256,7 @@ public class SeedDataService
Description = "Update and review club rules handbook",
Status = WorkItemStatus.Review,
AssigneeId = member2Member.Id,
-CreatedById = adminMembers.First(m => m.ClubId == tennisClub.Id).Id,
+CreatedById = managerMember.Id,
ClubId = tennisClub.Id,
DueDate = DateTimeOffset.UtcNow.AddDays(21),
CreatedAt = DateTimeOffset.UtcNow,
@@ -295,7 +270,7 @@ public class SeedDataService
Description = "Update club website with new photos",
Status = WorkItemStatus.Done,
AssigneeId = managerMember.Id,
-CreatedById = adminMembers.First(m => m.ClubId == tennisClub.Id).Id,
+CreatedById = managerMember.Id,
ClubId = tennisClub.Id,
DueDate = DateTimeOffset.UtcNow.AddDays(-5),
CreatedAt = DateTimeOffset.UtcNow.AddDays(-10),
@@ -310,7 +285,7 @@ public class SeedDataService
Description = "Create new cycling routes for summer",
Status = WorkItemStatus.Open,
AssigneeId = null,
-CreatedById = adminMembers.First(m => m.ClubId == cyclingClub.Id).Id,
+CreatedById = member1Members.First(m => m.ClubId == cyclingClub.Id).Id,
ClubId = cyclingClub.Id,
DueDate = DateTimeOffset.UtcNow.AddDays(21),
CreatedAt = DateTimeOffset.UtcNow,
@@ -324,7 +299,7 @@ public class SeedDataService
Description = "Organize safety and maintenance training",
Status = WorkItemStatus.Assigned,
AssigneeId = member1Members.First(m => m.ClubId == cyclingClub.Id).Id,
-CreatedById = adminMembers.First(m => m.ClubId == cyclingClub.Id).Id,
+CreatedById = member1Members.First(m => m.ClubId == cyclingClub.Id).Id,
ClubId = cyclingClub.Id,
DueDate = DateTimeOffset.UtcNow.AddDays(14),
CreatedAt = DateTimeOffset.UtcNow,
@@ -337,8 +312,8 @@ public class SeedDataService
Title = "Group ride coordination",
Description = "Schedule and coordinate weekly group rides",
Status = WorkItemStatus.InProgress,
-AssigneeId = adminMembers.First(m => m.ClubId == cyclingClub.Id).Id,
+AssigneeId = member1Members.First(m => m.ClubId == cyclingClub.Id).Id,
-CreatedById = adminMembers.First(m => m.ClubId == cyclingClub.Id).Id,
+CreatedById = member1Members.First(m => m.ClubId == cyclingClub.Id).Id,
ClubId = cyclingClub.Id,
DueDate = DateTimeOffset.UtcNow.AddDays(7),
CreatedAt = DateTimeOffset.UtcNow,
@@ -368,7 +343,7 @@ public class SeedDataService
EndTime = now.AddDays(-1).Date.ToLocalTime().AddHours(12),
Capacity = 2,
ClubId = tennisClub.Id,
-CreatedById = adminMembers.First(m => m.ClubId == tennisClub.Id).Id,
+CreatedById = managerMember.Id,
CreatedAt = DateTimeOffset.UtcNow,
UpdatedAt = DateTimeOffset.UtcNow
},
@@ -383,7 +358,7 @@ public class SeedDataService
EndTime = now.Date.ToLocalTime().AddHours(18),
Capacity = 3,
ClubId = tennisClub.Id,
-CreatedById = adminMembers.First(m => m.ClubId == tennisClub.Id).Id,
+CreatedById = managerMember.Id,
CreatedAt = DateTimeOffset.UtcNow,
UpdatedAt = DateTimeOffset.UtcNow
},
@@ -398,7 +373,7 @@ public class SeedDataService
EndTime = now.AddDays(7).Date.ToLocalTime().AddHours(17),
Capacity = 5,
ClubId = tennisClub.Id,
-CreatedById = adminMembers.First(m => m.ClubId == tennisClub.Id).Id,
+CreatedById = managerMember.Id,
CreatedAt = DateTimeOffset.UtcNow,
UpdatedAt = DateTimeOffset.UtcNow
},
@@ -414,7 +389,7 @@ public class SeedDataService
EndTime = now.Date.ToLocalTime().AddHours(9),
Capacity = 10,
ClubId = cyclingClub.Id,
-CreatedById = adminMembers.First(m => m.ClubId == cyclingClub.Id).Id,
+CreatedById = member1Members.First(m => m.ClubId == cyclingClub.Id).Id,
CreatedAt = DateTimeOffset.UtcNow,
UpdatedAt = DateTimeOffset.UtcNow
},
@@ -429,7 +404,7 @@ public class SeedDataService
EndTime = now.AddDays(7).Date.ToLocalTime().AddHours(14),
Capacity = 4,
ClubId = cyclingClub.Id,
-CreatedById = adminMembers.First(m => m.ClubId == cyclingClub.Id).Id,
+CreatedById = member1Members.First(m => m.ClubId == cyclingClub.Id).Id,
CreatedAt = DateTimeOffset.UtcNow,
UpdatedAt = DateTimeOffset.UtcNow
}
@@ -69,7 +69,7 @@ public class ClubEndpointsTests : IntegrationTestBase
ExternalUserId = adminUserId,
DisplayName = "Admin User",
Email = "admin@test.com",
-Role = ClubRole.Admin,
+Role = ClubRole.Manager,
ClubId = club1Id,
CreatedAt = DateTimeOffset.UtcNow,
UpdatedAt = DateTimeOffset.UtcNow
@@ -60,7 +60,7 @@ public class MemberEndpointsTests : IntegrationTestBase
ExternalUserId = "admin-user-id",
DisplayName = "Admin User",
Email = "admin@test.com",
-Role = ClubRole.Admin,
+Role = ClubRole.Manager,
ClubId = club1Id,
CreatedAt = DateTimeOffset.UtcNow,
UpdatedAt = DateTimeOffset.UtcNow
@@ -303,55 +303,7 @@ public class ShiftCrudTests : IntegrationTestBase
}

[Fact]
-public async Task DeleteShift_AsAdmin_DeletesShift()
+public async Task DeleteShift_AsManager_DeletesShift()
-{
-    // Arrange
-    var shiftId = Guid.NewGuid();
-    var clubId = Guid.NewGuid();
-    var createdBy = Guid.NewGuid();
-    var now = DateTimeOffset.UtcNow;
-    using (var scope = Factory.Services.CreateScope())
-    {
-        var context = scope.ServiceProvider.GetRequiredService<AppDbContext>();
-        context.Shifts.Add(new Shift
-        {
-            Id = shiftId,
-            TenantId = "tenant1",
-            Title = "Test Shift",
-            StartTime = now.AddDays(1),
-            EndTime = now.AddDays(1).AddHours(4),
-            Capacity = 5,
-            ClubId = clubId,
-            CreatedById = createdBy,
-            CreatedAt = now,
-            UpdatedAt = now
-        });
-        await context.SaveChangesAsync();
-    }
-    SetTenant("tenant1");
-    AuthenticateAs("admin@test.com", new Dictionary<string, string> { ["tenant1"] = "Admin" });
-    // Act
-    var response = await Client.DeleteAsync($"/api/shifts/{shiftId}");
-    // Assert
-    Assert.Equal(HttpStatusCode.NoContent, response.StatusCode);
-    // Verify shift is deleted
-    using (var scope = Factory.Services.CreateScope())
-    {
-        var context = scope.ServiceProvider.GetRequiredService<AppDbContext>();
-        var shift = await context.Shifts.FindAsync(shiftId);
-        Assert.Null(shift);
-    }
-}
-
-[Fact]
-public async Task DeleteShift_AsManager_ReturnsForbidden()
{
    // Arrange
    var shiftId = Guid.NewGuid();
@@ -387,7 +339,15 @@ public class ShiftCrudTests : IntegrationTestBase
var response = await Client.DeleteAsync($"/api/shifts/{shiftId}");

// Assert
-Assert.Equal(HttpStatusCode.Forbidden, response.StatusCode);
+Assert.Equal(HttpStatusCode.NoContent, response.StatusCode);
+
+// Verify shift is deleted
+using (var scope = Factory.Services.CreateScope())
+{
+    var context = scope.ServiceProvider.GetRequiredService<AppDbContext>();
+    var shift = await context.Shifts.FindAsync(shiftId);
+    Assert.Null(shift);
+}
}

[Fact]
@@ -387,52 +387,7 @@ public class TaskCrudTests : IntegrationTestBase
}

[Fact]
-public async Task DeleteTask_AsAdmin_DeletesTask()
+public async Task DeleteTask_AsManager_DeletesTask()
-{
-    // Arrange
-    var taskId = Guid.NewGuid();
-    var club1 = Guid.NewGuid();
-    var createdBy = Guid.NewGuid();
-    using (var scope = Factory.Services.CreateScope())
-    {
-        var context = scope.ServiceProvider.GetRequiredService<AppDbContext>();
-        context.WorkItems.Add(new WorkItem
-        {
-            Id = taskId,
-            TenantId = "tenant1",
-            Title = "Test Task",
-            Status = WorkItemStatus.Open,
-            ClubId = club1,
-            CreatedById = createdBy,
-            CreatedAt = DateTimeOffset.UtcNow,
-            UpdatedAt = DateTimeOffset.UtcNow
-        });
-        await context.SaveChangesAsync();
-    }
-    SetTenant("tenant1");
-    AuthenticateAs("admin@test.com", new Dictionary<string, string> { ["tenant1"] = "Admin" });
-    // Act
-    var response = await Client.DeleteAsync($"/api/tasks/{taskId}");
-    // Assert
-    Assert.Equal(HttpStatusCode.NoContent, response.StatusCode);
-    // Verify task is deleted
-    using (var scope = Factory.Services.CreateScope())
-    {
-        var context = scope.ServiceProvider.GetRequiredService<AppDbContext>();
-        var task = await context.WorkItems.FindAsync(taskId);
-        Assert.Null(task);
-    }
-}
-
-[Fact]
-public async Task DeleteTask_AsManager_ReturnsForbidden()
{
    // Arrange
    var taskId = Guid.NewGuid();
@@ -465,7 +420,15 @@ public class TaskCrudTests : IntegrationTestBase
var response = await Client.DeleteAsync($"/api/tasks/{taskId}");

// Assert
-Assert.Equal(HttpStatusCode.Forbidden, response.StatusCode);
+Assert.Equal(HttpStatusCode.NoContent, response.StatusCode);
+
+// Verify task is deleted
+using (var scope = Factory.Services.CreateScope())
+{
+    var context = scope.ServiceProvider.GetRequiredService<AppDbContext>();
+    var task = await context.WorkItems.FindAsync(taskId);
+    Assert.Null(task);
+}
}
}
Binary file not shown (removed image, 300 KiB).

Binary file not shown (removed image, 205 KiB).
@@ -0,0 +1,12 @@
import { ClubManagement } from '@/components/admin/club-management';

export default function AdminClubsPage() {
  return (
    <div className="max-w-6xl mx-auto space-y-6">
      <div className="flex items-center justify-between">
        <h1 className="text-3xl font-bold">Club Management</h1>
      </div>
      <ClubManagement />
    </div>
  );
}
@@ -1,13 +1,19 @@
+'use client';
+
import { AuthGuard } from '@/components/auth-guard';
import { ClubSwitcher } from '@/components/club-switcher';
import Link from 'next/link';
import { SignOutButton } from '@/components/sign-out-button';
+import { useSession } from 'next-auth/react';

export default function ProtectedLayout({
  children,
}: {
  children: React.ReactNode;
}) {
+  const { data } = useSession();
+  const isAdmin = data?.user?.isAdmin;
+
  return (
    <AuthGuard>
      <div className="flex min-h-screen bg-gray-50">
@@ -15,6 +21,13 @@ export default function ProtectedLayout({
<div className="p-4 border-b">
  <h1 className="text-xl font-bold">WorkClub</h1>
</div>
+{isAdmin ? (
+  <nav className="flex-1 p-4 space-y-2">
+    <Link href="/admin/clubs" className="flex items-center px-4 py-2 text-sm font-medium rounded-md hover:bg-gray-100">
+      Club Management
+    </Link>
+  </nav>
+) : (
<nav className="flex-1 p-4 space-y-2">
  <Link href="/dashboard" className="flex items-center px-4 py-2 text-sm font-medium rounded-md hover:bg-gray-100">
    Dashboard
@@ -29,12 +42,13 @@ export default function ProtectedLayout({
      Members
    </Link>
  </nav>
+)}
</aside>
<div className="flex-1 flex flex-col">
  <header className="bg-white border-b h-16 flex items-center justify-between px-6">
    <div className="flex items-center gap-4">
-      <ClubSwitcher />
+      {!isAdmin && <ClubSwitcher />}
    </div>
    <div className="flex items-center gap-4">
      <SignOutButton />
@@ -7,7 +7,6 @@ import { Button } from '@/components/ui/button';
import { Progress } from '@/components/ui/progress';
import { Badge } from '@/components/ui/badge';
import { useRouter } from 'next/navigation';
-import { useSession } from 'next-auth/react';

export default function ShiftDetailPage({ params }: { params: Promise<{ id: string }> }) {
  const resolvedParams = use(params);
@@ -15,7 +14,6 @@ export default function ShiftDetailPage({ params }: { params: Promise<{ id: stri
  const signUpMutation = useSignUpShift();
  const cancelMutation = useCancelSignUp();
  const router = useRouter();
-  const { data: session } = useSession();

  if (isLoading) return <div>Loading shift...</div>;
  if (!shift) return <div>Shift not found</div>;
@@ -1,7 +1,7 @@
'use client'; 'use client';
import { useEffect, Suspense } from 'react'; import { useEffect, Suspense } from 'react';
import { signIn, signOut, useSession } from 'next-auth/react'; import { signOut, useSession } from 'next-auth/react';
import { useRouter, useSearchParams } from 'next/navigation'; import { useRouter, useSearchParams } from 'next/navigation';
import { Card, CardHeader, CardTitle, CardContent, CardFooter } from '@/components/ui/card'; import { Card, CardHeader, CardTitle, CardContent, CardFooter } from '@/components/ui/card';
import { Button } from '@/components/ui/button'; import { Button } from '@/components/ui/button';
@@ -18,8 +18,33 @@ function LoginContent() {
} }
}, [status, router]); }, [status, router]);
const handleSignIn = () => { const handleSignIn = async () => {
signIn('keycloak', { callbackUrl: '/dashboard' }); const csrfResponse = await fetch('/api/auth/csrf');
const csrfPayload = await csrfResponse.json() as { csrfToken?: string };
if (!csrfPayload.csrfToken) {
window.location.href = '/api/auth/signin?callbackUrl=%2Fdashboard';
return;
}
const form = document.createElement('form');
form.method = 'POST';
form.action = '/api/auth/signin/keycloak';
const csrfInput = document.createElement('input');
csrfInput.type = 'hidden';
csrfInput.name = 'csrfToken';
csrfInput.value = csrfPayload.csrfToken;
form.appendChild(csrfInput);
const callbackInput = document.createElement('input');
callbackInput.type = 'hidden';
callbackInput.name = 'callbackUrl';
callbackInput.value = `${window.location.origin}/dashboard`;
form.appendChild(callbackInput);
document.body.appendChild(form);
form.submit();
}; };
const handleSwitchAccount = () => { const handleSwitchAccount = () => {
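The new `handleSignIn` replaces next-auth's `signIn('keycloak')` helper with a hand-built, CSRF-backed form POST to `/api/auth/signin/keycloak`. The body that form submits can be sketched as a pure helper (the function name `buildSignInBody` is hypothetical, introduced here only for illustration):

```typescript
// Hypothetical sketch of the URL-encoded body the generated <form> submits:
// a csrfToken plus an absolute callbackUrl pointing at /dashboard.
function buildSignInBody(csrfToken: string, origin: string): string {
  const params = new URLSearchParams();
  params.set('csrfToken', csrfToken);
  params.set('callbackUrl', `${origin}/dashboard`);
  return params.toString();
}
```

A plain `fetch` POST with this body would not work here, because the browser must follow the resulting redirect to Keycloak as a top-level navigation, which is why the commit builds and submits a real form instead.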
+13 -2
@@ -9,6 +9,7 @@ declare module "next-auth" {
      email?: string | null
      image?: string | null
      clubs?: Record<string, string>
+     isAdmin?: boolean
    }
    accessToken?: string
  }
@@ -16,6 +17,7 @@ declare module "next-auth" {
  interface JWT {
    clubs?: Record<string, string>
    accessToken?: string
+   isAdmin?: boolean
  }
}
@@ -43,10 +45,18 @@ export const { handlers, signIn, signOut, auth } = NextAuth({
  ],
  callbacks: {
    async jwt({ token, account }) {
-     if (account) {
+     if (account && account.access_token) {
        // Add clubs claim from Keycloak access token
-       token.clubs = (account as Record<string, unknown>).clubs as Record<string, string> || {}
+       token.clubs = (account as { clubs?: Record<string, string> }).clubs || {}
        token.accessToken = account.access_token
+       try {
+         const payload = JSON.parse(Buffer.from((token.accessToken as string).split('.')[1], 'base64').toString());
+         const roles = (payload.realm_access?.roles as string[]) || [];
+         token.isAdmin = roles.includes('admin');
+       } catch {
+         token.isAdmin = false;
+       }
      }
      return token
    },
@@ -54,6 +64,7 @@ export const { handlers, signIn, signOut, auth } = NextAuth({
      // Expose clubs to client
      if (session.user) {
        session.user.clubs = token.clubs as Record<string, string> | undefined
+       session.user.isAdmin = token.isAdmin as boolean | undefined
      }
      session.accessToken = token.accessToken as string | undefined
      return session
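The `jwt` callback above derives `isAdmin` by base64-decoding the middle segment of the Keycloak access token and checking `realm_access.roles`. A standalone sketch of that extraction, with one caveat: JWT segments are base64url-encoded, so Node's `'base64url'` decoder is the safer choice than the `'base64'` used in the committed code (a suggested hardening, not what the diff ships):

```typescript
// Standalone sketch of the admin-role extraction from the jwt callback.
// JWT payloads are base64url-encoded; using 'base64url' here avoids
// mis-decoding tokens whose payload contains '-' or '_' characters.
function extractIsAdmin(accessToken: string): boolean {
  try {
    const payload = JSON.parse(
      Buffer.from(accessToken.split('.')[1], 'base64url').toString()
    );
    const roles: string[] = payload.realm_access?.roles ?? [];
    return roles.includes('admin');
  } catch {
    return false;
  }
}
```

Note this decodes without verifying the signature, which is acceptable here only because the token was just received server-side from Keycloak over the OIDC code flow; the API must still validate the signature itself.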
@@ -0,0 +1,168 @@
'use client';
import { useState, useEffect } from 'react';
import { useSession } from 'next-auth/react';
type Club = {
id: string;
name: string;
sportType: string;
description?: string;
};
export function ClubManagement() {
const { data: session } = useSession();
const [clubs, setClubs] = useState<Club[]>([]);
const [loading, setLoading] = useState(true);
const [isCreating, setIsCreating] = useState(false);
const [newClub, setNewClub] = useState({ name: '', sportType: 'Tennis', description: '' });
useEffect(() => {
const fetchClubsLocally = async () => {
try {
const res = await fetch(`${process.env.NEXT_PUBLIC_API_URL}/api/admin/clubs`, {
headers: { Authorization: `Bearer ${session?.accessToken}` },
});
if (res.ok) {
const data = await res.json();
setClubs(data);
}
} catch (error) {
console.error('Failed to fetch clubs', error);
} finally {
setLoading(false);
}
};
if (session) fetchClubsLocally();
}, [session]);
const fetchClubs = async () => {
try {
const res = await fetch(`${process.env.NEXT_PUBLIC_API_URL}/api/admin/clubs`, {
headers: { Authorization: `Bearer ${session?.accessToken}` },
});
if (res.ok) {
const data = await res.json();
setClubs(data);
}
} catch (error) {
console.error('Failed to fetch clubs', error);
} finally {
setLoading(false);
}
};
const handleCreate = async (e: React.FormEvent) => {
e.preventDefault();
try {
const res = await fetch(`${process.env.NEXT_PUBLIC_API_URL}/api/admin/clubs`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${session?.accessToken}`,
},
body: JSON.stringify({
name: newClub.name,
sportType: newClub.sportType === 'Tennis' ? 0 : 1, // map UI label to the backend SportType enum (0 = Tennis, 1 = Cycling)
description: newClub.description,
}),
});
if (res.ok) {
setNewClub({ name: '', sportType: 'Tennis', description: '' });
setIsCreating(false);
fetchClubs();
}
} catch (e) {
console.error(e);
}
};
const handleDelete = async (id: string) => {
if (!confirm('Are you sure you want to delete this club?')) return;
try {
const res = await fetch(`${process.env.NEXT_PUBLIC_API_URL}/api/admin/clubs/${id}`, {
method: 'DELETE',
headers: { Authorization: `Bearer ${session?.accessToken}` },
});
if (res.ok) {
fetchClubs();
}
} catch (e) {
console.error(e);
}
};
if (loading) return <div>Loading clubs...</div>;
return (
<div className="space-y-6">
<div className="flex justify-between">
<h2 className="text-xl font-semibold">All Clubs</h2>
<button
onClick={() => setIsCreating(true)}
className="bg-blue-600 text-white px-4 py-2 rounded shadow hover:bg-blue-700"
>
Create New Club
</button>
</div>
{isCreating && (
<form onSubmit={handleCreate} className="bg-white p-4 rounded shadow space-y-4 border">
<h3 className="font-semibold text-lg">New Club</h3>
<div>
<label className="block text-sm font-medium">Name</label>
<input
required
className="mt-1 block w-full p-2 border rounded"
value={newClub.name}
onChange={e => setNewClub({ ...newClub, name: e.target.value })}
/>
</div>
<div>
<label className="block text-sm font-medium">Sport Type</label>
<select
className="mt-1 block w-full p-2 border rounded"
value={newClub.sportType}
onChange={e => setNewClub({ ...newClub, sportType: e.target.value })}
>
<option value="Tennis">Tennis</option>
<option value="Cycling">Cycling</option>
</select>
</div>
<div>
<label className="block text-sm font-medium">Description</label>
<textarea
className="mt-1 block w-full p-2 border rounded"
value={newClub.description}
onChange={e => setNewClub({ ...newClub, description: e.target.value })}
/>
</div>
<div className="flex gap-2">
<button type="submit" className="bg-blue-600 text-white px-4 py-2 rounded hover:bg-blue-700">Save</button>
<button type="button" onClick={() => setIsCreating(false)} className="px-4 py-2 border rounded hover:bg-gray-50">Cancel</button>
</div>
</form>
)}
<div className="grid gap-4 md:grid-cols-2 lg:grid-cols-3">
{clubs.map(club => (
<div key={club.id} className="bg-white p-4 rounded shadow border">
<h3 className="font-bold text-lg">{club.name}</h3>
<p className="text-sm text-gray-500 mb-2">{club.sportType}</p>
<p className="text-sm line-clamp-2 mb-4">{club.description || 'No description'}</p>
<div className="flex justify-end gap-2">
<button
onClick={() => handleDelete(club.id)}
className="text-red-600 hover:text-red-800 text-sm font-medium"
>
Delete
</button>
</div>
</div>
))}
{clubs.length === 0 && <p className="text-gray-500 col-span-full">No clubs found.</p>}
</div>
</div>
);
}
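The inline ternary in `handleCreate` hard-codes the label-to-enum mapping and silently maps any unknown label to `1`. A small lookup-table helper is easier to extend when more sports are added (the name `mapSportType` is hypothetical, and the enum values 0 = Tennis, 1 = Cycling are assumed from the component's ternary):

```typescript
// Hypothetical extraction of the inline ternary in handleCreate, assuming the
// API's SportType enum is 0 = Tennis, 1 = Cycling as the component implies.
const SPORT_TYPE: Record<string, number> = { Tennis: 0, Cycling: 1 };

function mapSportType(label: string): number {
  const value = SPORT_TYPE[label];
  if (value === undefined) throw new Error(`Unknown sport type: ${label}`);
  return value;
}
```

Throwing on an unknown label surfaces a stale option list immediately instead of posting a wrong enum value.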
+19 -5
@@ -6,7 +6,7 @@ import { ReactNode, useEffect } from 'react';
  import { useTenant } from '../contexts/tenant-context';
  export function AuthGuard({ children }: { children: ReactNode }) {
-   const { status } = useSession();
+   const { data, status } = useSession();
    const { activeClubId, clubs, setActiveClub, clubsLoading } = useTenant();
    const router = useRouter();
@@ -17,14 +17,27 @@ export function AuthGuard({ children }: { children: ReactNode }) {
    }, [status, router]);
    useEffect(() => {
-     if (status === 'authenticated' && clubs.length > 0) {
+     if (status === 'authenticated') {
+       const isAdmin = data?.user?.isAdmin;
+       // Admin routing
+       if (isAdmin) {
+         if (!window.location.pathname.startsWith('/admin')) {
+           router.push('/admin/clubs');
+         }
+         return;
+       }
+       // Normal user routing
+       if (clubs.length > 0) {
          if (clubs.length === 1 && !activeClubId) {
            setActiveClub(clubs[0].id);
          } else if (clubs.length > 1 && !activeClubId) {
            router.push('/select-club');
          }
        }
-   }, [status, clubs, activeClubId, router, setActiveClub]);
+     }
+   }, [status, clubs, activeClubId, router, setActiveClub, data]);
    if (status === 'loading') {
      return (
@@ -46,7 +59,8 @@ export function AuthGuard({ children }: { children: ReactNode }) {
      );
    }
-   if (clubs.length === 0 && status === 'authenticated') {
+   const isAdmin = data?.user?.isAdmin;
+   if (clubs.length === 0 && status === 'authenticated' && !isAdmin) {
      const handleSwitchAccount = () => {
        const keycloakLogoutUrl = `${process.env.NEXT_PUBLIC_KEYCLOAK_ISSUER || 'http://localhost:8080/realms/workclub'}/protocol/openid-connect/logout?redirect_uri=${encodeURIComponent(window.location.origin + '/login')}`;
        signOut({ redirect: false }).then(() => {
@@ -68,7 +82,7 @@ export function AuthGuard({ children }: { children: ReactNode }) {
      );
    }
-   if (clubs.length > 1 && !activeClubId) {
+   if (clubs.length > 1 && !activeClubId && !isAdmin) {
      return null;
    }
Three binary image files deleted (121 KiB, 126 KiB, 170 KiB); contents not shown.
+14 -9
@@ -18,7 +18,7 @@ spec:
    spec:
      containers:
        - name: api
-         image: workclub-api:latest
+         image: 192.168.241.13:8080/workclub-api:latest
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
@@ -28,10 +28,10 @@
            httpGet:
              path: /health/startup
              port: http
-           initialDelaySeconds: 5
+           initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 5
-           failureThreshold: 30
+           failureThreshold: 60
          livenessProbe:
            httpGet:
              path: /health/live
@@ -44,10 +44,10 @@
            httpGet:
              path: /health/ready
              port: http
-           initialDelaySeconds: 5
+           initialDelaySeconds: 60
-           periodSeconds: 10
+           periodSeconds: 15
            timeoutSeconds: 5
-           failureThreshold: 2
+           failureThreshold: 10
          resources:
            requests:
@@ -55,7 +55,7 @@
              memory: 256Mi
            limits:
              cpu: 500m
-             memory: 512Mi
+             memory: 768Mi
          env:
            - name: ASPNETCORE_ENVIRONMENT
@@ -67,8 +67,13 @@
                secretKeyRef:
                  name: workclub-secrets
                  key: database-connection-string
-           - name: Keycloak__Url
+           - name: Keycloak__Authority
              valueFrom:
                configMapKeyRef:
                  name: workclub-config
-                 key: keycloak-url
+                 key: keycloak-authority
+           - name: Keycloak__Audience
+             valueFrom:
+               configMapKeyRef:
+                 name: workclub-config
+                 key: keycloak-audience
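The loosened probe settings change how long Kubernetes will wait on the API container. The worst-case window before a probe gives up is roughly `initialDelaySeconds + failureThreshold × periodSeconds`; a quick sketch of that arithmetic for the new values (the helper name is ours, not a Kubernetes API):

```typescript
// Rough worst-case window (seconds) before a probe marks a container failed:
// initialDelaySeconds + failureThreshold * periodSeconds.
function probeWindowSeconds(
  initialDelay: number,
  period: number,
  failureThreshold: number
): number {
  return initialDelay + failureThreshold * period;
}

// New API startup probe: 10 + 60 * 10 = 610s allowed before restart.
// New API readiness probe: 60 + 10 * 15 = 210s of failures before it stays unready.
```

So the change extends the startup grace from roughly 305s to about 610s and stops a couple of slow readiness checks from pulling the pod out of the Service.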
+2 -1
@@ -6,11 +6,12 @@ metadata:
    app: workclub-api
    component: backend
spec:
- type: ClusterIP
+ type: NodePort
  selector:
    app: workclub-api
  ports:
    - name: http
      port: 80
      targetPort: 8080
+     nodePort: 30081
      protocol: TCP
+20 -3
@@ -6,9 +6,11 @@ metadata:
    app: workclub
data:
  log-level: "Information"
- cors-origins: "http://localhost:3000"
+ cors-origins: "http://localhost:3000,http://192.168.240.200:30080"
- api-base-url: "http://workclub-api"
+ api-base-url: "http://192.168.240.200:30081"
- keycloak-url: "http://workclub-keycloak"
+ keycloak-url: "http://192.168.240.200:30082"
+ keycloak-authority: "http://192.168.240.200:30082/realms/workclub"
+ keycloak-audience: "workclub-api"
  keycloak-realm: "workclub"
  # Database configuration
@@ -39,3 +41,18 @@ data:
    \c workclub
    GRANT ALL PRIVILEGES ON SCHEMA public TO app;
    ALTER SCHEMA public OWNER TO app;
+   -- App admin role for RLS bypass policies used by API startup seed
+   DO $$
+   BEGIN
+     IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'app_admin') THEN
+       CREATE ROLE app_admin;
+     END IF;
+   END
+   $$;
+   GRANT app_admin TO app WITH INHERIT FALSE, SET TRUE;
+   GRANT USAGE ON SCHEMA public TO app_admin;
+   GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO app_admin;
+   GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO app_admin;
+   ALTER DEFAULT PRIVILEGES FOR ROLE app IN SCHEMA public GRANT ALL ON TABLES TO app_admin;
+   ALTER DEFAULT PRIVILEGES FOR ROLE app IN SCHEMA public GRANT ALL ON SEQUENCES TO app_admin;
+29 -1
@@ -18,7 +18,7 @@ spec:
    spec:
      containers:
        - name: frontend
-         image: workclub-frontend:latest
+         image: 192.168.241.13:8080/workclub-frontend:latest
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
@@ -62,3 +62,31 @@ spec:
              configMapKeyRef:
                name: workclub-config
                key: keycloak-url
+           - name: NEXT_PUBLIC_KEYCLOAK_ISSUER
+             valueFrom:
+               configMapKeyRef:
+                 name: workclub-config
+                 key: keycloak-authority
+           - name: NEXTAUTH_URL
+             value: "http://192.168.240.200:30080"
+           - name: AUTH_TRUST_HOST
+             value: "true"
+           - name: NEXTAUTH_SECRET
+             valueFrom:
+               secretKeyRef:
+                 name: workclub-secrets
+                 key: nextauth-secret
+           - name: KEYCLOAK_CLIENT_ID
+             value: "workclub-app"
+           - name: KEYCLOAK_CLIENT_SECRET
+             valueFrom:
+               secretKeyRef:
+                 name: workclub-secrets
+                 key: keycloak-client-secret
+           - name: KEYCLOAK_ISSUER
+             valueFrom:
+               configMapKeyRef:
+                 name: workclub-config
+                 key: keycloak-authority
+           - name: KEYCLOAK_ISSUER_INTERNAL
+             value: "http://workclub-keycloak/realms/workclub"
+2 -1
@@ -6,11 +6,12 @@ metadata:
    app: workclub-frontend
    component: frontend
spec:
- type: ClusterIP
+ type: NodePort
  selector:
    app: workclub-frontend
  ports:
    - name: http
      port: 80
      targetPort: 3000
+     nodePort: 30080
      protocol: TCP
+42 -14
@@ -7,6 +7,9 @@ metadata:
    component: auth
spec:
  replicas: 1
+ strategy:
+   type: Recreate
+ progressDeadlineSeconds: 1800
  selector:
    matchLabels:
      app: workclub-keycloak
@@ -20,36 +23,48 @@ spec:
        - name: keycloak
          image: quay.io/keycloak/keycloak:26.1
          imagePullPolicy: IfNotPresent
-         command:
-           - start
+         args:
+           - start-dev
+           - --import-realm
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
+           - name: management
+             containerPort: 9000
+             protocol: TCP
          readinessProbe:
            httpGet:
              path: /health/ready
-             port: http
+             port: management
-           initialDelaySeconds: 10
+           initialDelaySeconds: 240
-           periodSeconds: 10
+           periodSeconds: 15
            timeoutSeconds: 5
-           failureThreshold: 2
+           failureThreshold: 10
+         startupProbe:
+           httpGet:
+             path: /health/ready
+             port: management
+           initialDelaySeconds: 60
+           periodSeconds: 15
+           timeoutSeconds: 5
+           failureThreshold: 120
          livenessProbe:
            httpGet:
              path: /health/live
-             port: http
+             port: management
-           initialDelaySeconds: 20
+           initialDelaySeconds: 420
-           periodSeconds: 15
+           periodSeconds: 20
            timeoutSeconds: 5
-           failureThreshold: 3
+           failureThreshold: 5
          resources:
            requests:
              cpu: 100m
              memory: 256Mi
            limits:
              cpu: 500m
-             memory: 512Mi
+             memory: 1024Mi
          env:
            - name: KC_DB
              value: postgres
@@ -66,9 +81,12 @@ spec:
                secretKeyRef:
                  name: workclub-secrets
                  key: keycloak-db-password
-           - name: KEYCLOAK_ADMIN
-             value: admin
-           - name: KEYCLOAK_ADMIN_PASSWORD
+           - name: KC_BOOTSTRAP_ADMIN_USERNAME
+             valueFrom:
+               secretKeyRef:
+                 name: workclub-secrets
+                 key: keycloak-admin-username
+           - name: KC_BOOTSTRAP_ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: workclub-secrets
@@ -79,3 +97,13 @@ spec:
              value: "edge"
            - name: KC_HTTP_ENABLED
              value: "true"
+           - name: KC_HEALTH_ENABLED
+             value: "true"
+         volumeMounts:
+           - name: keycloak-realm-import
+             mountPath: /opt/keycloak/data/import
+             readOnly: true
+     volumes:
+       - name: keycloak-realm-import
+         configMap:
+           name: keycloak-realm-import
@@ -0,0 +1,248 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: keycloak-realm-import
labels:
app: workclub-keycloak
data:
realm-export.json: |
{
"realm": "workclub",
"enabled": true,
"displayName": "Work Club Manager",
"registrationAllowed": false,
"rememberMe": true,
"verifyEmail": false,
"loginWithEmailAllowed": true,
"duplicateEmailsAllowed": false,
"resetPasswordAllowed": true,
"editUsernameAllowed": false,
"bruteForceProtected": true,
"clients": [
{
"clientId": "workclub-api",
"name": "Work Club API",
"enabled": true,
"protocol": "openid-connect",
"clientAuthenticatorType": "client-secret",
"secret": "dev-secret-workclub-api-change-in-production",
"redirectUris": [],
"webOrigins": [],
"publicClient": false,
"directAccessGrantsEnabled": false,
"serviceAccountsEnabled": false,
"standardFlowEnabled": false,
"implicitFlowEnabled": false,
"fullScopeAllowed": true,
"protocolMappers": [
{
"name": "audience-workclub-api",
"protocol": "openid-connect",
"protocolMapper": "oidc-audience-mapper",
"consentRequired": false,
"config": {
"included.client.audience": "workclub-api",
"id.token.claim": "false",
"access.token.claim": "true"
}
},
{
"name": "clubs-claim",
"protocol": "openid-connect",
"protocolMapper": "oidc-usermodel-attribute-mapper",
"consentRequired": false,
"config": {
"user.attribute": "clubs",
"claim.name": "clubs",
"jsonType.label": "String",
"id.token.claim": "true",
"access.token.claim": "true",
"userinfo.token.claim": "true"
}
}
]
},
{
"clientId": "workclub-app",
"name": "Work Club Frontend",
"enabled": true,
"protocol": "openid-connect",
"publicClient": true,
"redirectUris": [
"http://localhost:3000/*",
"http://localhost:3001/*",
"http://workclub-frontend/*",
"http://192.168.240.200:30080/*"
],
"webOrigins": [
"http://localhost:3000",
"http://localhost:3001",
"http://workclub-frontend",
"http://192.168.240.200:30080"
],
"directAccessGrantsEnabled": true,
"standardFlowEnabled": true,
"implicitFlowEnabled": false,
"fullScopeAllowed": true,
"protocolMappers": [
{
"name": "audience-workclub-api",
"protocol": "openid-connect",
"protocolMapper": "oidc-audience-mapper",
"consentRequired": false,
"config": {
"included.client.audience": "workclub-api",
"id.token.claim": "false",
"access.token.claim": "true"
}
},
{
"name": "clubs-claim",
"protocol": "openid-connect",
"protocolMapper": "oidc-usermodel-attribute-mapper",
"consentRequired": false,
"config": {
"user.attribute": "clubs",
"claim.name": "clubs",
"jsonType.label": "String",
"id.token.claim": "true",
"access.token.claim": "true",
"userinfo.token.claim": "true"
}
}
]
}
],
"roles": {
"realm": [
{
"name": "admin",
"description": "Club admin"
},
{
"name": "manager",
"description": "Club manager"
},
{
"name": "member",
"description": "Club member"
},
{
"name": "viewer",
"description": "Club viewer"
}
]
},
"users": [
{
"username": "admin@test.com",
"enabled": true,
"email": "admin@test.com",
"firstName": "Admin",
"lastName": "User",
"credentials": [
{
"type": "password",
"value": "testpass123",
"temporary": false
}
],
"realmRoles": [
"admin"
],
"attributes": {
"clubs": [
"64e05b5e-ef45-81d7-f2e8-3d14bd197383,Admin,3b4afcfa-1352-8fc7-b497-8ab52a0d5fda,Member"
]
}
},
{
"username": "manager@test.com",
"enabled": true,
"email": "manager@test.com",
"firstName": "Manager",
"lastName": "User",
"credentials": [
{
"type": "password",
"value": "testpass123",
"temporary": false
}
],
"realmRoles": [
"manager"
],
"attributes": {
"clubs": [
"64e05b5e-ef45-81d7-f2e8-3d14bd197383,Manager"
]
}
},
{
"username": "member1@test.com",
"enabled": true,
"email": "member1@test.com",
"firstName": "Member",
"lastName": "One",
"credentials": [
{
"type": "password",
"value": "testpass123",
"temporary": false
}
],
"realmRoles": [
"member"
],
"attributes": {
"clubs": [
"64e05b5e-ef45-81d7-f2e8-3d14bd197383,Member,3b4afcfa-1352-8fc7-b497-8ab52a0d5fda,Member"
]
}
},
{
"username": "member2@test.com",
"enabled": true,
"email": "member2@test.com",
"firstName": "Member",
"lastName": "Two",
"credentials": [
{
"type": "password",
"value": "testpass123",
"temporary": false
}
],
"realmRoles": [
"member"
],
"attributes": {
"clubs": [
"64e05b5e-ef45-81d7-f2e8-3d14bd197383,Member"
]
}
},
{
"username": "viewer@test.com",
"enabled": true,
"email": "viewer@test.com",
"firstName": "Viewer",
"lastName": "User",
"credentials": [
{
"type": "password",
"value": "testpass123",
"temporary": false
}
],
"realmRoles": [
"viewer"
],
"attributes": {
"clubs": [
"64e05b5e-ef45-81d7-f2e8-3d14bd197383,Viewer"
]
}
}
]
}
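The realm import above seeds each user's `clubs` attribute as a single comma-joined string of alternating `clubId,Role` pairs (e.g. `"64e05b5e-…,Admin,3b4afcfa-…,Member"`), while the frontend types the claim as `Record<string, string>`. How the pairs are turned into that map is not shown in this diff; a hypothetical parser for the seeded format might look like:

```typescript
// Hypothetical parser for the seeded "clubs" attribute: a flat comma-separated
// list of alternating clubId,Role pairs, mapped to { clubId: role }.
function parseClubsClaim(raw: string): Record<string, string> {
  const parts = raw.split(',').map(p => p.trim()).filter(Boolean);
  const clubs: Record<string, string> = {};
  for (let i = 0; i + 1 < parts.length; i += 2) {
    clubs[parts[i]] = parts[i + 1];
  }
  return clubs;
}
```

A flat pair-list is fragile (a role name containing a comma would shift every following pair), so a JSON-valued attribute would be a more robust encoding if the seed format is ever revisited.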
+2 -1
@@ -6,11 +6,12 @@ metadata:
    app: workclub-keycloak
    component: auth
spec:
- type: ClusterIP
+ type: NodePort
  selector:
    app: workclub-keycloak
  ports:
    - name: http
      port: 80
      targetPort: 8080
+     nodePort: 30082
      protocol: TCP
+4
@@ -9,6 +9,10 @@ resources:
  - postgres-statefulset.yaml
  - postgres-service.yaml
  - keycloak-deployment.yaml
+ - keycloak-realm-import-configmap.yaml
  - keycloak-service.yaml
  - configmap.yaml
  - ingress.yaml
+generatorOptions:
+  disableNameSuffixHash: true
+9 -2
@@ -3,6 +3,7 @@ kind: Kustomization
resources:
  - ../../base
+ - secrets.yaml
namespace: workclub-dev
@@ -10,9 +11,11 @@ commonLabels:
  environment: development
images:
- - name: workclub-api
+ - name: 192.168.241.13:8080/workclub-api
+   newName: 192.168.241.13:8080/workclub-api
    newTag: dev
- - name: workclub-frontend
+ - name: 192.168.241.13:8080/workclub-frontend
+   newName: 192.168.241.13:8080/workclub-frontend
    newTag: dev
replicas:
@@ -30,3 +33,7 @@ patches:
    target:
      kind: Deployment
      name: workclub-frontend
+ - path: patches/postgres-patch.yaml
+   target:
+     kind: StatefulSet
+     name: workclub-postgres
@@ -0,0 +1,11 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: workclub-postgres
spec:
template:
spec:
volumes:
- name: postgres-data
emptyDir: {}
volumeClaimTemplates: [] # This removes the VCT from the base
+13
@@ -0,0 +1,13 @@
apiVersion: v1
kind: Secret
metadata:
  name: workclub-secrets
type: Opaque
stringData:
  database-connection-string: "Host=workclub-postgres;Database=workclub;Username=app;Password=devpassword"
  postgres-password: "devpassword"
  keycloak-db-password: "keycloakpass"
  keycloak-admin-username: "admin"
  keycloak-admin-password: "adminpassword"
  keycloak-client-secret: "dev-secret-workclub-api-change-in-production"
  nextauth-secret: "dev-secret-change-in-production-use-openssl-rand-base64-32"
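The placeholder `nextauth-secret` value spells out its own replacement recipe: `openssl rand -base64 32`. The Node equivalent, for generating a production value without leaving the toolchain (the helper name is ours):

```typescript
import { randomBytes } from 'node:crypto';

// Node equivalent of `openssl rand -base64 32` for generating NEXTAUTH_SECRET:
// 32 random bytes, base64-encoded to a 44-character string.
function generateSecret(): string {
  return randomBytes(32).toString('base64');
}
```

In production these overlay secrets should come from a secret manager or a sealed-secrets mechanism rather than being committed in `stringData`, as the "change-in-production" placeholders already imply.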
+23 -7
@@ -162,7 +162,7 @@
  "firstName": "Admin",
  "lastName": "User",
  "attributes": {
-   "clubs": ["64e05b5e-ef45-81d7-f2e8-3d14bd197383,3b4afcfa-1352-8fc7-b497-8ab52a0d5fda"]
+   "clubs": []
  },
  "credentials": [
    {
@@ -171,7 +171,10 @@
      "temporary": false
    }
  ],
- "requiredActions": []
+ "requiredActions": [],
+ "realmRoles": [
+   "admin"
+ ]
},
{
  "username": "manager@test.com",
@@ -181,7 +184,9 @@
  "firstName": "Manager",
  "lastName": "User",
  "attributes": {
-   "clubs": ["64e05b5e-ef45-81d7-f2e8-3d14bd197383"]
+   "clubs": [
+     "64e05b5e-ef45-81d7-f2e8-3d14bd197383"
+   ]
  },
  "credentials": [
    {
@@ -200,7 +205,9 @@
  "firstName": "Member",
  "lastName": "One",
  "attributes": {
-   "clubs": ["64e05b5e-ef45-81d7-f2e8-3d14bd197383,3b4afcfa-1352-8fc7-b497-8ab52a0d5fda"]
+   "clubs": [
+     "64e05b5e-ef45-81d7-f2e8-3d14bd197383,3b4afcfa-1352-8fc7-b497-8ab52a0d5fda"
+   ]
  },
  "credentials": [
    {
@@ -219,7 +226,9 @@
  "firstName": "Member",
  "lastName": "Two",
  "attributes": {
-   "clubs": ["64e05b5e-ef45-81d7-f2e8-3d14bd197383"]
+   "clubs": [
+     "64e05b5e-ef45-81d7-f2e8-3d14bd197383"
+   ]
  },
  "credentials": [
    {
@@ -238,7 +247,9 @@
  "firstName": "Viewer",
  "lastName": "User",
  "attributes": {
-   "clubs": ["64e05b5e-ef45-81d7-f2e8-3d14bd197383"]
+   "clubs": [
+     "64e05b5e-ef45-81d7-f2e8-3d14bd197383"
+   ]
  },
  "credentials": [
    {
@@ -251,7 +262,12 @@
  }
],
"roles": {
- "realm": [],
+ "realm": [
+   {
+     "name": "admin",
+     "description": "System Admin"
+   }
+ ],
  "client": {}
},
"groups": [],
+20
@@ -0,0 +1,20 @@
schema: spec-driven
# Project context (optional)
# This is shown to AI when creating artifacts.
# Add your tech stack, conventions, style guides, domain knowledge, etc.
# Example:
# context: |
# Tech stack: TypeScript, React, Node.js
# We use conventional commits
# Domain: e-commerce platform
# Per-artifact rules (optional)
# Add custom rules for specific artifacts.
# Example:
# rules:
# proposal:
# - Keep proposals under 500 words
# - Always include a "Non-goals" section
# tasks:
# - Break tasks into chunks of max 2 hours