Merge remote-tracking branch 'origin/main'

This commit is contained in:
Peter Steinberger
2026-02-14 13:30:22 +01:00
80 changed files with 2208 additions and 556 deletions

# PR Workflow for Maintainers
Please read this in full and do not skip sections.
This is the single source of truth for the maintainer PR workflow.
## Triage order
Process PRs **oldest to newest**. Older PRs are more likely to have merge conflicts and stale dependencies; resolving them first keeps the queue healthy and avoids snowballing rebase pain.
## Working rule
Skills execute workflow. Maintainers provide judgment.
Always pause between skills to evaluate technical direction, not just command success.
Default mode is local-first: do not write to GitHub until the maintainer explicitly says go.
These three skills must be used in order:
1. `review-pr` — review only, produce findings
2. `prepare-pr` — rebase, fix, gate, push to PR head branch
3. `merge-pr` — squash-merge, verify MERGED state, clean up
They are necessary, but not sufficient. Maintainers must steer between steps and understand the code before moving forward.
If submitted code is low quality, ignore it and implement the best solution for the problem.
Do not continue if you cannot verify the problem is real or test the fix.
## Script-first contract
Skill runs should invoke these wrappers automatically. You only need to run them manually when debugging or doing an explicit script-only run:
- `scripts/pr-review <PR>`
- `scripts/pr review-checkout-main <PR>` or `scripts/pr review-checkout-pr <PR>` while reviewing
- `scripts/pr review-guard <PR>` before writing review outputs
- `scripts/pr review-validate-artifacts <PR>` after writing outputs
- `scripts/pr-prepare init <PR>`
- `scripts/pr-prepare validate-commit <PR>`
- `scripts/pr-prepare gates <PR>`
- `scripts/pr-prepare push <PR>`
- Optional one-shot prepare: `scripts/pr-prepare run <PR>`
- `scripts/pr-merge <PR>` (verify-only; the short form remains backward compatible)
- `scripts/pr-merge verify <PR>` (verify-only)
- Optional one-shot merge: `scripts/pr-merge run <PR>`
These wrappers run shared preflight checks and generate deterministic artifacts. They are designed to work from repo root or PR worktree cwd.
## Remote write policy
Until the maintainer explicitly approves remote actions, stay local-only.
Remote actions include:
- Pushing branches.
- Posting PR comments.
- Editing PR metadata (labels, assignees, state).
- Merging PRs.
- Editing advisory state or publishing advisories.
Allowed before approval:
- Local code changes.
- Local tests and validation.
- Drafting copy for PR/advisory comments.
- Read-only `gh` commands.
## Required artifacts
- `.local/pr-meta.json` and `.local/pr-meta.env` from review init.
- `.local/review.md` and `.local/review.json` from review output.
- `.local/prep-context.env` and `.local/prep.md` from prepare.
- `.local/prep.env` from prepare completion.
## Structured review handoff
`review-pr` must write `.local/review.json`.
In normal skill runs this is handled automatically. Use `scripts/pr review-artifacts-init <PR>` and `scripts/pr review-tests <PR> ...` manually only for debugging or explicit script-only runs.
Minimum schema:
```json
{
  "recommendation": "READY FOR /prepare-pr",
  "findings": [
    {
      "id": "F1",
      "severity": "IMPORTANT",
      "title": "Missing changelog entry",
      "area": "CHANGELOG.md",
      "fix": "Add a Fixes entry for PR #<PR>"
    }
  ],
  "tests": {
    "ran": ["pnpm test -- ..."],
    "gaps": ["..."],
    "result": "pass"
  }
}
```
`prepare-pr` resolves all `BLOCKER` and `IMPORTANT` findings from this file.
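If you need to pull the required findings out of `.local/review.json` by hand (for example while debugging a script-only run), a jq sketch against the schema above:

```sh
# List every finding prepare-pr must resolve, as "- [SEVERITY] id: title => fix"
jq -r '.findings[]
  | select(.severity == "BLOCKER" or .severity == "IMPORTANT")
  | "- [\(.severity)] \(.id): \(.title) => \(.fix)"' .local/review.json
```

NIT-level findings are filtered out, matching the resolution rule above.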
## Coding Agent
Use ChatGPT 5.3 Codex High. Fall back to 5.2 Codex High or 5.3 Codex Medium if necessary.
When approved, perform only the approved remote action, then pause for next instruction.
## PR quality bar
- Harden changes. Always evaluate security impact and abuse paths.
- Understand the system before changing it. Never make the codebase messier just to clear a PR queue.
## Rebase and conflict resolution
Before any substantive review or prep work, **always rebase the PR branch onto current `main` and resolve merge conflicts first**. A PR that cannot cleanly rebase is not ready for review — fix conflicts before evaluating correctness.
- During `prepare-pr`: rebase onto `main` as the first step, before fixing findings or running gates.
- If conflicts are complex or touch areas you do not understand, stop and escalate.
- Prefer **rebase** for linear history; **squash** when commit history is messy or unhelpful.
## Commit and changelog rules
- In normal `prepare-pr` runs, commits are created via `scripts/committer "<msg>" <file...>`. Use it manually only when operating outside the skill flow; avoid manual `git add`/`git commit` so staging stays scoped.
- Follow concise, action-oriented commit messages (e.g., `CLI: add verbose flag to send`).
- During `prepare-pr`, use concise, action-oriented subjects **without** PR numbers or thanks; reserve `(#<PR>) thanks @<pr-author>` for the final merge/squash commit.
- Group related changes; avoid bundling unrelated refactors.
- Changelog workflow: keep the latest released version at the top (no `Unreleased`); after publishing, bump the version and start a new top section.
- When working on a PR: add a changelog entry with the PR number and thank the contributor (mandatory in this workflow).
- When working on an issue: reference the issue in the changelog entry.
- In this workflow, changelog is always required even for internal/test-only changes.
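For example, a changelog entry under these rules could look like this (version, PR number, and author are hypothetical placeholders):

```
## 1.2.3

- CLI: add verbose flag to send (#1234) - thanks @contributor
```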
## Gate policy
In fresh worktrees, dependency bootstrap is handled by wrappers before local gates. Manual equivalent:
```sh
pnpm install --frozen-lockfile
```
Gate set:
- Always: `pnpm build`, `pnpm check`
- `pnpm test` required unless high-confidence docs-only criteria pass.
## Co-contributor and clawtributors
- If we squash, add the PR author as a co-contributor in the commit body using a `Co-authored-by:` trailer.
- When maintainer prepares and merges the PR, add the maintainer as an additional `Co-authored-by:` trailer too.
- Avoid `--auto` merges for maintainer landings. Merge only after checks are green so the maintainer account is the actor and attribution is deterministic.
- For squash merges, set `--author-email` to a reviewer-owned email with fallback candidates; if merge fails due to author-email validation, retry once with the next candidate.
- If you review a PR and later do work on it, land via merge/squash (no direct-main commits) and always add the PR author as a co-contributor.
- When merging a PR: leave a PR comment that explains exactly what we did, include the SHA hashes, and record the comment URL in the final report.
- Manual post-merge step for new contributors: run `bun scripts/update-clawtributors.ts` to add their avatar to the README "Thanks to all clawtributors" list, then commit the regenerated README.
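As a sketch, a final squash commit message under these rules might read as follows (subject, PR number, names, and emails are hypothetical):

```
Fix crash when config file is empty (#1234) thanks @contributor

Co-authored-by: Contributor Name <contributor@users.noreply.github.com>
Co-authored-by: Maintainer Name <maintainer@users.noreply.github.com>
```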
## Review mode vs landing mode
- **Review mode (PR link only):** read `gh pr view`/`gh pr diff`; **do not** switch branches; **do not** change code.
- **Landing mode (exception path):** use only when normal `review-pr -> prepare-pr -> merge-pr` flow cannot safely preserve attribution or cannot satisfy branch protection. Create an integration branch from `main`, bring in PR commits (**prefer rebase** for linear history; **merge allowed** when complexity/conflicts make it safer), apply fixes, add changelog (+ thanks + PR #), run full gate **locally before committing** (`pnpm build && pnpm check && pnpm test`), commit, merge back to `main`, then `git switch main` (never stay on a topic branch after landing). Important: the contributor needs to be in the git graph after this!
## Pre-review safety checks
- Before starting a review when a GH Issue/PR is pasted: `review-pr`/`scripts/pr-review` should create and use an isolated `.worktrees/pr-<PR>` checkout from `origin/main` automatically. Do not require a clean main checkout, and do not run `git pull` in a dirty main checkout.
- PR review calls: prefer a single `gh pr view --json ...` to batch metadata/comments; run `gh pr diff` only when needed.
- PRs should summarize scope, note testing performed, and mention any user-facing changes or new flags.
- Read `docs/help/submitting-a-pr.md` ([Submitting a PR](https://docs.openclaw.ai/help/submitting-a-pr)) for what we expect from contributors.
## Unified workflow
Entry criteria:
Maintainer checkpoint before `prepare-pr`:
```
What problem are they trying to solve?
What is the optimal implementation?
Is the code properly scoped?
Can we fix up everything?
Do we have any questions?
```
Stop and escalate instead of continuing if:
Purpose:
- Make the PR merge-ready on its head branch.
- Rebase onto current `main` first, then fix blocker/important findings, then run gates.
- In fresh worktrees, bootstrap dependencies before local gates (`pnpm install --frozen-lockfile`).
Expected output:
- Updated code and tests on the PR head branch.
- `.local/prep.md` with changes, verification, and current HEAD SHA.
- Final status: `PR is ready for /mergepr`.
Maintainer checkpoint before `merge-pr`:
```
Is this the optimal implementation?
Is the code properly scoped?
Is the code properly reusing existing logic in the codebase?
Is the code properly typed?
Is the code hardened?
Do we have enough tests?
Do we need regression tests?
Are tests using fake timers where relevant? (e.g., debounce/throttle, retry backoff, timeout branches, delayed callbacks, polling loops)
Do not add performative tests, ensure tests are real and there are no regressions.
Take your time, fix it properly, refactor if necessary.
Do you see any follow-up refactors we should do?
Did any changes introduce any potential security vulnerabilities?
```
Stop and escalate instead of continuing if:
- Fixing findings requires broad architecture changes outside safe PR scope.
- Security hardening requirements remain unresolved.
### Security advisory companion flow
Use this for GHSA-linked fixes and private reports.
1. Implement and test the fix locally first, do not edit advisory content yet.
2. Land the code fix PR through normal flow, including attribution and changelog where needed.
3. Prepare public-safe advisory text:
- No internal workflow chatter.
- No unnecessary exploit detail.
- Clear impact, affected range, fixed range, remediation, credits.
4. In GitHub advisory UI, set package ranges in the structured fields:
- `Affected versions`: `< fixed_version`
- `Patched versions`: `>= fixed_version`
Do not rely on description text alone.
5. If collaborator can edit text but cannot change advisory state, hand off to a Publisher to move triage -> accepted draft -> publish.
6. Advisory comments are posted manually in UI when required by policy. Do not rely on `gh api` automation for advisory comments.
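For example, if the fix ships in a hypothetical 1.4.2, the structured fields would read:

```
Affected versions: < 1.4.2
Patched versions: >= 1.4.2
```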
Maintainer checkpoint for security advisories:
- Is the rewrite public-safe and free of internal/process notes?
- Are affected and patched ranges correctly set in the advisory form fields?
- Are credits present and accurate?
- Do we have Publisher action if state controls are unavailable?
### 3) `merge-pr`
Purpose:
- Merge only after review and prep artifacts are present and checks are green.
- Use deterministic squash merge flow (`--match-head-commit` + explicit subject/body with co-author trailer), then verify the PR ends in `MERGED` state.
- If no required checks are configured on the PR, treat that as acceptable and continue after branch-up-to-date validation.
Go or no-go checklist before merge:
- All BLOCKER and IMPORTANT findings are resolved.
- Verification is meaningful and regression risk is acceptably low.
- Changelog is updated (mandatory) and docs are updated when required.
- Required CI checks are green and the branch is not behind `main`.
Expected output:
- Successful merge commit and recorded merge SHA.
- Worktree cleanup after successful merge.
- Comment on PR indicating merge was successful.
Maintainer checkpoint after merge:
- Were any refactors intentionally deferred and now need follow-up issue(s)?
- Did this reveal broader architecture or test gaps we should address?
- Run `bun scripts/update-clawtributors.ts` if the contributor is new.
## Chasing main mitigation
To reduce repeated "branch behind main" loops:
1. Keep prep and merge windows short.
2. Rebase/update once, as late as possible, right before final checks.
3. Avoid non-essential commits on the PR branch after checks start.
4. Prefer merge queue or auto-merge when available.

---
name: merge-pr
description: Merge a GitHub PR via squash after /preparepr. Use when asked to merge a ready PR. Do not push to main or modify code. Ensure the PR ends in MERGED state and clean up worktrees after success.
---
# Merge PR
## Overview
Merge a prepared PR via `gh pr merge --squash` and clean up the worktree after success.
## Inputs
- Ask for PR number or URL.
- If missing, use `.local/prep.env` from the PR worktree.
- If missing, auto-detect from conversation.
- If ambiguous, ask.
## Safety
- Use `gh pr merge --squash` as the only path to `main`.
- Do not run `git push` at all during merge.
- Do not run gateway stop commands. Do not kill processes. Do not touch port 18792.
- Do not execute merge or PR-comment GitHub write actions until maintainer explicitly approves.
## Execution Rule
- Execute the workflow. Do not stop after printing the TODO checklist.
- If delegating, require the delegate to run commands and capture outputs.
## Known Footguns
- If you see "fatal: not a git repository", you are in the wrong directory. Use `~/dev/openclaw` if available; otherwise ask user.
- Read `.local/review.md` and `.local/prep.md` in the worktree. Do not skip.
- Clean up the real worktree directory `.worktrees/pr-<PR>` only after a successful merge.
- Expect cleanup to remove `.local/` artifacts.
## Completion Criteria
- Ensure `gh pr merge` succeeds.
- Ensure PR state is `MERGED`, never `CLOSED`.
- Record the merge SHA.
- Run cleanup only after merge success.
## First: Create a TODO Checklist
Create a checklist of all merge steps, print it, then continue and execute the commands.
## Setup: Use a Worktree
Use an isolated worktree for all merge work.
```sh
cd ~/dev/openclaw
# Sanity: confirm you are in the repo
git rev-parse --show-toplevel
WORKTREE_DIR=".worktrees/pr-<PR>"
```
Run all commands inside the worktree directory.
## Load Local Artifacts (Mandatory)
Expect these files from earlier steps:
- `.local/review.md` from `/reviewpr`
- `.local/prep.md` from `/preparepr`
```sh
ls -la .local || true
if [ -f .local/review.md ]; then
  echo "Found .local/review.md"
  sed -n '1,120p' .local/review.md
else
  echo "Missing .local/review.md. Stop and run /reviewpr, then /preparepr."
  exit 1
fi
if [ -f .local/prep.md ]; then
  echo "Found .local/prep.md"
  sed -n '1,120p' .local/prep.md
else
  echo "Missing .local/prep.md. Stop and run /preparepr first."
  exit 1
fi
```
## Steps
1. Identify PR meta
```sh
gh pr view <PR> --json number,title,state,isDraft,author,headRefName,baseRefName,headRepository,body --jq '{number,title,state,isDraft,author:.author.login,head:.headRefName,base:.baseRefName,headRepo:.headRepository.nameWithOwner,body}'
contrib=$(gh pr view <PR> --json author --jq .author.login)
head=$(gh pr view <PR> --json headRefName --jq .headRefName)
head_repo_url=$(gh pr view <PR> --json headRepository --jq .headRepository.url)
```
2. Run sanity checks
Stop if any are true:
- PR is a draft.
- Required checks are failing.
- Branch is behind main.
```sh
# Checks
gh pr checks <PR>
# Check behind main
git fetch origin main
git fetch origin pull/<PR>/head:pr-<PR>
git merge-base --is-ancestor origin/main pr-<PR> || echo "PR branch is behind main, run /preparepr"
```
Treat "no required checks configured" as acceptable, but stop on any required check that is failing or pending.
If anything is failing or behind, stop and say to run `/preparepr`.
3. Merge PR and delete branch
If checks are still running, use `--auto` to queue the merge.
```sh
# Check status first
check_status=$(gh pr checks <PR> 2>&1)
if echo "$check_status" | grep -q "pending\|queued"; then
  echo "Checks still running, using --auto to queue merge"
  gh pr merge <PR> --squash --delete-branch --auto
  echo "Merge queued. Monitor with: gh pr checks <PR> --watch"
else
  gh pr merge <PR> --squash --delete-branch
fi
```
Before running the merge command, pause and ask for explicit maintainer go-ahead.
If merge fails, report the error and stop. Do not retry in a loop.
If the PR needs changes beyond what `/preparepr` already did, stop and say to run `/preparepr` again.
4. Get merge SHA
```sh
merge_sha=$(gh pr view <PR> --json mergeCommit --jq '.mergeCommit.oid')
echo "merge_sha=$merge_sha"
```
5. Optional comment
Use a literal multiline string or heredoc for newlines.
```sh
gh pr comment <PR> --body "$(printf 'Merged via squash.\n\n- Merge commit: %s\n\nThanks @%s!\n' "$merge_sha" "$contrib")"
```
6. Verify PR state is MERGED
```sh
gh pr view <PR> --json state --jq .state
```
7. Clean up worktree only on success
Run cleanup only if step 6 returned `MERGED`.
```sh
cd ~/dev/openclaw
git worktree remove ".worktrees/pr-<PR>" --force
git branch -D temp/pr-<PR> 2>/dev/null || true
git branch -D pr-<PR> 2>/dev/null || true
```
## Guardrails
- Worktree only.
- Do not close PRs.
- End in MERGED state.
- Clean up only after merge success.
- Never push to main. Use `gh pr merge --squash` only.
- Do not run `git push` at all in this command.

---
name: prepare-pr
description: Prepare a GitHub PR for merge by rebasing onto main, fixing review findings, running gates, committing fixes, and pushing to the PR head branch. Use after /reviewpr. Never merge or push to main.
---
# Prepare PR
## Overview
Prepare a PR branch for merge with review fixes, green gates, and an updated head branch.
## Inputs
- Ask for PR number or URL.
- If missing, use `.local/pr-meta.env` if present in the PR worktree.
- If missing, auto-detect from conversation.
- If ambiguous, ask.
## Safety
- Only push to PR head with explicit `--force-with-lease` against known head SHA.
- Never push to `main` or `origin/main`. Push only to the PR head branch.
- Never run `git push` without specifying remote and branch explicitly. Do not run bare `git push`.
- Do not run gateway stop commands. Do not kill processes. Do not touch port 18792.
- Do not run `git clean -fdx`.
- Do not run `git add -A` or `git add .`. Stage only specific files changed.
- Do not push to GitHub until the maintainer explicitly approves the push step.
## Execution Rule
- Execute the workflow. Do not stop after printing the TODO checklist.
- If delegating, require the delegate to run commands and capture outputs.
## Known Footguns
- If you see "fatal: not a git repository", you are in the wrong directory. Use `~/dev/openclaw` if available; otherwise ask user.
- Do not run `git clean -fdx`.
- Do not run `git add -A` or `git add .`.
## Completion Criteria
- Rebase PR commits onto `origin/main`.
- Fix all BLOCKER and IMPORTANT items from `.local/review.md`.
- Run gates and pass.
- Commit prep changes.
- Push the updated HEAD back to the PR head branch.
- Write `.local/prep.md` with a prep summary.
- Output exactly: `PR is ready for /mergepr`.
## First: Create a TODO Checklist
Create a checklist of all prep steps, print it, then continue and execute the commands.
## Setup: Use a Worktree
Use an isolated worktree for all prep work.
```sh
cd ~/dev/openclaw
# Sanity: confirm you are in the repo
git rev-parse --show-toplevel
WORKTREE_DIR=".worktrees/pr-<PR>"
```
Run all commands inside the worktree directory.
## Load Review Findings (Mandatory)
```sh
if [ -f .local/review.md ]; then
  echo "Found review findings from /reviewpr"
else
  echo "Missing .local/review.md. Run /reviewpr first and save findings."
  exit 1
fi
# Read it
sed -n '1,200p' .local/review.md
```
## Steps
1. Identify PR meta (author, head branch, head repo URL)
```sh
gh pr view <PR> --json number,title,author,headRefName,baseRefName,headRepository,body --jq '{number,title,author:.author.login,head:.headRefName,base:.baseRefName,headRepo:.headRepository.nameWithOwner,body}'
contrib=$(gh pr view <PR> --json author --jq .author.login)
head=$(gh pr view <PR> --json headRefName --jq .headRefName)
head_repo_url=$(gh pr view <PR> --json headRepository --jq .headRepository.url)
```
2. Fetch the PR branch tip into a local ref
```sh
git fetch origin pull/<PR>/head:pr-<PR>
```
3. Rebase PR commits onto latest main
```sh
# Move worktree to the PR tip first
git reset --hard pr-<PR>
# Rebase onto current main
git fetch origin main
git rebase origin/main
```
If conflicts happen:
- Resolve each conflicted file.
- Run `git add <resolved_file>` for each file.
- Run `git rebase --continue`.
If the rebase gets confusing or you resolve conflicts 3 or more times, stop and report.
4. Fix issues from `.local/review.md`
- Fix all BLOCKER and IMPORTANT items.
- NITs are optional.
- Keep scope tight.
Keep a running log in `.local/prep.md`:
- List which review items you fixed.
- List which files you touched.
- Note behavior changes.
5. Update `CHANGELOG.md` if flagged in review
Check `.local/review.md` section H for guidance.
If flagged and user-facing:
- Check if `CHANGELOG.md` exists.
```sh
ls CHANGELOG.md 2>/dev/null
```
- Follow existing format.
- Add a concise entry with PR number and contributor.
6. Update docs if flagged in review
Check `.local/review.md` section G for guidance.
If flagged, update only docs related to the PR changes.
7. Commit prep fixes
Stage only specific files:
```sh
git add <file1> <file2> ...
```
Preferred commit tool:
```sh
committer "fix: <summary> (#<PR>) (thanks @$contrib)" <changed files>
```
If `committer` is not found:
```sh
git commit -m "fix: <summary> (#<PR>) (thanks @$contrib)"
```
8. Run full gates before pushing
```sh
pnpm install
pnpm build
pnpm ui:build
pnpm check
pnpm test
```
Require all to pass. If something fails, fix, commit, and rerun. Allow at most 3 fix and rerun cycles. If gates still fail after 3 attempts, stop and report the failures. Do not loop indefinitely.
9. Push updates back to the PR head branch
```sh
# Ensure remote for PR head exists
git remote add prhead "$head_repo_url.git" 2>/dev/null || git remote set-url prhead "$head_repo_url.git"
# Use force with lease after rebase
# Double check: $head must NOT be "main" or "master"
echo "Pushing to branch: $head"
if [ "$head" = "main" ] || [ "$head" = "master" ]; then
  echo "ERROR: head branch is main/master. This is wrong. Stopping."
  exit 1
fi
git push --force-with-lease prhead HEAD:$head
```
Before running the command above, pause and ask for explicit maintainer go-ahead to perform the push.
10. Verify PR is not behind main (Mandatory)
```sh
git fetch origin main
git fetch origin pull/<PR>/head:pr-<PR>-verify --force
git merge-base --is-ancestor origin/main pr-<PR>-verify && echo "PR is up to date with main" || echo "ERROR: PR is still behind main, rebase again"
git branch -D pr-<PR>-verify 2>/dev/null || true
```
If still behind main, repeat steps 2 through 9.
11. Write prep summary artifacts (Mandatory)
Update `.local/prep.md` with:
- Current HEAD sha from `git rev-parse HEAD`.
- Short bullet list of changes.
- Gate results.
- Push confirmation.
- Rebase verification result.
Create or overwrite `.local/prep.md` and verify it exists and is non-empty:
```sh
git rev-parse HEAD
ls -la .local/prep.md
wc -l .local/prep.md
```
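A hypothetical `.local/prep.md` covering the bullets above (SHAs, findings, and branch names are placeholders):

```
# Prep summary for PR #1234

HEAD: 0123abc
Changes:
- F1: added changelog entry
- F2: tightened input validation in send command
Gates: pnpm build, pnpm check, pnpm test all pass
Push: force-with-lease to prhead HEAD:feature-branch succeeded
Rebase: up to date with origin/main
```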
12. Output
Include a diff stat summary:
```sh
git diff --stat origin/main..HEAD
git diff --shortstat origin/main..HEAD
```
Report totals: X files changed, Y insertions(+), Z deletions(-).
If gates passed and push succeeded, print exactly:
```
PR is ready for /mergepr
```
Otherwise, list remaining failures and stop.
## Guardrails
- Worktree only.
- Do not delete the worktree on success. `/mergepr` may reuse it.
- Do not run `gh pr merge`.
- Never push to main. Only push to the PR head branch.
- Run and pass all gates before pushing.

---
name: review-pr
description: Review-only GitHub pull request analysis with the gh CLI. Use when asked to review a PR, provide structured feedback, or assess readiness to land. Do not merge, push, or make code changes you intend to keep.
---
# Review PR
## Overview
Perform a thorough review-only PR assessment and return a structured recommendation on readiness for /preparepr.
## Inputs
- Ask for PR number or URL.
- If missing, always ask. Never auto-detect from conversation.
- If ambiguous, ask.
## Safety
- Never push, never merge, and never make code changes you intend to keep.
- Work only in `.worktrees/pr-<PR>`.
- Never push to `main` or `origin/main`, not during review, not ever.
- Do not run `git push` at all during review. Treat review as read only.
- Do not stop or kill the gateway. Do not run gateway stop commands. Do not kill processes on port 18792.
- Do not perform any GitHub write action (comments, assignees, labels, state changes) unless maintainer explicitly approves it.
## Execution Rule
- Execute the workflow. Do not stop after printing the TODO checklist.
- If delegating, require the delegate to run commands and capture outputs, not a plan.
## Known Failure Modes
- If you see "fatal: not a git repository", you are in the wrong directory. Use `~/dev/openclaw` if available; otherwise ask user.
- Do not stop after printing the checklist. That is not completion.
## Writing Style for Output
- Write casual and direct.
- Avoid em dashes and en dashes. Use commas or separate sentences.
## Completion Criteria
- Run the commands in the worktree and inspect the PR directly.
- Produce the structured review sections A through J.
- Save the full review to `.local/review.md` inside the worktree.
## First: Create a TODO Checklist
Create a checklist of all review steps, print it, then continue and execute the commands.
## Setup: Use a Worktree
Use an isolated worktree for all review work.
```sh
cd ~/dev/openclaw
# Sanity: confirm you are in the repo
git rev-parse --show-toplevel
WORKTREE_DIR=".worktrees/pr-<PR>"
git fetch origin main
# Reuse existing worktree if it exists, otherwise create new
if [ -d "$WORKTREE_DIR" ]; then
  cd "$WORKTREE_DIR"
  git checkout temp/pr-<PR> 2>/dev/null || git checkout -b temp/pr-<PR>
  git fetch origin main
  git reset --hard origin/main
else
  git worktree add "$WORKTREE_DIR" -b temp/pr-<PR> origin/main
  cd "$WORKTREE_DIR"
fi
# Create local scratch space that persists across /reviewpr to /preparepr to /mergepr
mkdir -p .local
```
Run all commands inside the worktree directory.
Start on `origin/main` so you can check for existing implementations before looking at PR code.
## Steps
1. Setup and metadata
```sh
scripts/pr-review <PR>
ls -la .local/pr-meta.json .local/pr-meta.env .local/review-context.env .local/review-mode.env
gh pr view <PR> --json number,title,state,isDraft,author,baseRefName,headRefName,headRepository,url,body,labels,assignees,reviewRequests,files,additions,deletions --jq '{number,title,url,state,isDraft,author:.author.login,base:.baseRefName,head:.headRefName,headRepo:.headRepository.nameWithOwner,additions,deletions,files:.files|length,body}'
```
2. Existing implementation check on main
Check if this already exists in main before looking at the PR branch.
- Identify the core feature or fix from the PR title and description.
- Search for existing implementations using keywords from the PR title, changed file paths, and function or component names from the diff.
```sh
scripts/pr review-checkout-main <PR>
# Use keywords from the PR title and changed files
rg -n "<keyword_from_pr_title>" -S src packages apps ui || true
rg -n "<function_or_component_name>" -S src packages apps ui || true
git log --oneline --all --grep="<keyword_from_pr_title>" | head -20
```
If it already exists, call it out as a BLOCKER or at least IMPORTANT.
3. Optional claim step, only with explicit approval
If the maintainer asks to claim the PR, assign yourself. Otherwise skip this.
```sh
gh_user=$(gh api user --jq .login)
gh pr edit <PR> --add-assignee "$gh_user" || echo "Could not assign reviewer, continuing"
```
4. Read the PR description carefully
Use the body from step 1. Summarize goal, scope, and missing context.
5. Read the diff thoroughly
Minimum:
```sh
scripts/pr review-checkout-pr <PR>
gh pr diff <PR>
source .local/review-context.env
git diff --stat "$MERGE_BASE"..pr-<PR>
git diff "$MERGE_BASE"..pr-<PR>
```
Optional local tests: use the wrapper for targeted validation and executed-test verification.
```sh
scripts/pr review-tests <PR> <test-file> [<test-file> ...]
```
If you need full code context locally, fetch the PR head to a local ref and diff it. Do not create a merge commit.
```sh
git fetch origin pull/<PR>/head:pr-<PR>
# Show changes without modifying the working tree
git diff --stat origin/main..pr-<PR>
git diff origin/main..pr-<PR>
```
Initialize the review artifact templates:
```sh
scripts/pr review-artifacts-init <PR>
```
If you want to browse the PR version of files directly, temporarily check out `pr-<PR>` in the worktree. Do not commit or push. Return to `temp/pr-<PR>` and reset to `origin/main` afterward.
```sh
# Use only if needed
# git checkout pr-<PR>
# ...inspect files...
# git checkout temp/pr-<PR>
# git reset --hard origin/main
```
6. Validate the change is needed and valuable
Be honest. Call out low value AI slop.
7. Evaluate implementation quality
Review correctness, design, performance, and ergonomics. Produce both review outputs as you go:
- Fill `.local/review.md` sections A through J.
- Fill `.local/review.json`.
Minimum JSON shape:
```json
{
  "recommendation": "READY FOR /prepare-pr",
  "findings": [
    {
      "id": "F1",
      "severity": "IMPORTANT",
      "title": "...",
      "area": "path/or/component",
      "fix": "Actionable fix"
    }
  ],
  "tests": {
    "ran": [],
    "gaps": [],
    "result": "pass"
  },
  "docs": "up_to_date|missing|not_applicable",
  "changelog": "required"
}
```
8. Perform a security review
Assume OpenClaw subagents run with full disk access, including git, gh, and shell. Check auth, input validation, secrets, dependencies, tool safety, and privacy.
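A starting point for the mechanical part of this pass (the pattern list is illustrative, not exhaustive, and assumes the worktree setup above):

```shell
# Grep the PR diff for patterns that deserve a closer security look.
git diff origin/main...HEAD 2>/dev/null \
  | rg -i "apikey|api_key|secret|token|password|exec|spawn|child_process|eval\(" \
  || echo "no obvious risky patterns; still review auth and input handling manually"
security_grep_done=1
```

Grep hits are leads, not findings; follow each one into the surrounding code.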
9. Review tests and verification
Identify what exists, what is missing, and what would be a minimal regression test.
10. Check docs
Check if the PR touches code with related documentation such as README, docs, inline API docs, or config examples.
- If docs exist for the changed area and the PR does not update them, flag as IMPORTANT.
- If the PR adds a new feature or config option with no docs, flag as IMPORTANT.
- If the change is purely internal with no user-facing impact, skip this.
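One way to sketch the docs check, assuming docs live under `docs/` and `README.md` (the paths are a guess; adjust to the repo layout):

```shell
# List files changed by the PR, then search docs for mentions of each
# touched file's base name. Hits are candidate docs to verify.
changed=$(git diff --name-only origin/main...HEAD 2>/dev/null | head -20)
for f in $changed; do
  base=$(basename "$f")
  rg -l "${base%.*}" docs README.md 2>/dev/null || true
done
docs_scan_done=1
```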
11. Check changelog
Check if `CHANGELOG.md` exists and whether the PR warrants an entry.
- If the project has a changelog and the PR is user-facing, flag missing entry as IMPORTANT.
- Leave the change for /preparepr, only flag it here.
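A rough check, assuming the changelog references PRs by `#<number>` as this repo's entries do:

```shell
# PR_NUMBER is the PR under review; 12345 is just a placeholder default.
pr="${PR_NUMBER:-12345}"
if [ -f CHANGELOG.md ]; then
  if rg -q "#${pr}" CHANGELOG.md 2>/dev/null; then
    changelog_status="entry-found"
  else
    changelog_status="missing-entry"  # flag as IMPORTANT if the PR is user-facing
  fi
else
  changelog_status="no-changelog"
fi
echo "changelog: $changelog_status"
```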
12. Answer the key question
Decide if /preparepr can fix issues or the contributor must update the PR.
13. Save findings to the worktree
Write the full structured review sections A through J to `.local/review.md` and the structured findings to `.local/review.json`.
Create or overwrite the files, then run the branch guard and artifact validation and verify the review is non-empty.
```sh
scripts/pr review-guard <PR>
scripts/pr review-validate-artifacts <PR>
ls -la .local/review.md
wc -l .local/review.md
```
14. Output the structured review
Produce a review that matches what you saved to `.local/review.md`.
A) TL;DR recommendation
- One of: READY FOR /preparepr | NEEDS WORK | NEEDS DISCUSSION | NOT USEFUL (CLOSE)
- 1 to 3 sentences.
B) What changed
C) What is good
D) Security findings
E) Concerns or questions (actionable)
- Numbered list.
- Mark each item as BLOCKER, IMPORTANT, or NIT.
- For each, point to file or area and propose a concrete fix.
F) Tests
G) Docs status
- State if related docs are up to date, missing, or not applicable.
H) Changelog
- State if `CHANGELOG.md` needs an entry and which category.
I) Follow ups (optional)
J) Suggested PR comment (optional)
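A minimal skeleton for `.local/review.md` that mirrors these sections (headings only; the exact wording is up to you):

```markdown
# PR <PR> review

A) TL;DR: READY FOR /preparepr | NEEDS WORK | NEEDS DISCUSSION | NOT USEFUL (CLOSE)
B) What changed: ...
C) What is good: ...
D) Security findings: ...
E) Concerns or questions: 1. [IMPORTANT] ... (area: ..., fix: ...)
F) Tests: ...
G) Docs status: up to date | missing | not applicable
H) Changelog: needs entry under <category> | none needed
I) Follow ups: ...
J) Suggested PR comment: ...
```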
## Guardrails
- Keep the review read-only: do not merge, do not push.
- Work in the worktree only, and do not delete it after review.
- Use merge-base scoped diffs for local context to avoid stale branch drift.

.gitignore vendored
View File

@@ -82,5 +82,5 @@ USER.md
/memory/
.agent/*.json
!.agent/workflows/
local/
/local/
package-lock.json

View File

@@ -13,6 +13,7 @@ Docs: https://docs.openclaw.ai
- Skills: remove duplicate `local-places` Google Places skill/proxy and keep `goplaces` as the single supported Google Places path.
- Agents: add pre-prompt context diagnostics (`messages`, `systemPromptChars`, `promptChars`, provider/model, session file) before embedded runner prompt calls to improve overflow debugging. (#8930) Thanks @Glucksberg.
- Onboarding/Providers: add first-class Hugging Face Inference provider support (provider wiring, onboarding auth choice/API key flow, and default-model selection), and preserve Hugging Face auth intent in auth-choice remapping (`tokenProvider=huggingface` with `authChoice=apiKey`) while skipping env-override prompts when an explicit token is provided. (#13472) Thanks @Josephrp.
- Onboarding/Providers: add `minimax-api-key-cn` auth choice for the MiniMax China API endpoint. (#15191) Thanks @liuy.
### Breaking
@@ -21,10 +22,13 @@ Docs: https://docs.openclaw.ai
### Fixes
- Gateway/Auth: add trusted-proxy mode hardening follow-ups by keeping `OPENCLAW_GATEWAY_*` env compatibility, auto-normalizing invalid setup combinations in interactive `gateway configure` (trusted-proxy forces `bind=lan` and disables Tailscale serve/funnel), and suppressing shared-secret/rate-limit audit findings that do not apply to trusted-proxy deployments. (#15940) Thanks @nickytonline.
- Docs/Hooks: update hooks documentation URLs to the new `/automation/hooks` location. (#16165) Thanks @nicholascyh.
- Security/Audit: warn when `gateway.tools.allow` re-enables default-denied tools over HTTP `POST /tools/invoke`, since this can increase RCE blast radius if the gateway is reachable.
- Feishu: stop persistent Typing reaction on NO_REPLY/suppressed runs by wiring reply-dispatcher cleanup to remove typing indicators. (#15464) Thanks @arosstale.
- BlueBubbles: gracefully degrade when Private API is disabled by filtering private-only actions, skipping private-only reactions/reply effects, and avoiding private reply markers so non-private flows remain usable. (#16002) Thanks @L-U-C-K-Y.
- Outbound: add a write-ahead delivery queue with crash-recovery retries to prevent lost outbound messages after gateway restarts. (#15636) Thanks @nabbilkhan, @thewilloftheshadow.
- Auto-reply/Threading: auto-inject implicit reply threading so `replyToMode` works without requiring model-emitted `[[reply_to_current]]`, while preserving `replyToMode: "off"` behavior for implicit Slack replies and keeping block-streaming chunk coalescing stable under `replyToMode: "first"`. (#14976) Thanks @Diaspar4u.
- Auto-reply/Threading: honor explicit `[[reply_to_*]]` tags even when `replyToMode` is `off`. (#16174) Thanks @aldoeliacim.
- Outbound/Threading: pass `replyTo` and `threadId` from `message send` tool actions through the core outbound send path to channel adapters, preserving thread/reply routing. (#14948) Thanks @mcaxtr.
- Auto-reply/Media: allow image-only inbound messages (no caption) to reach the agent instead of short-circuiting as empty text, and preserve thread context in queued/followup prompt bodies for media-only runs. (#11916) Thanks @arosstale.
- Discord: route autoThread replies to existing threads instead of the root channel. (#8302) Thanks @gavinbmoore, @thewilloftheshadow.
@@ -62,9 +66,12 @@ Docs: https://docs.openclaw.ai
- Sessions/Agents: pass `agentId` through status and usage transcript-resolution paths (auto-reply, gateway usage APIs, and session cost/log loaders) so non-default agents can resolve absolute session files without path-validation failures. (#15103) Thanks @jalehman.
- Sessions: archive previous transcript files on `/new` and `/reset` session resets (including gateway `sessions.reset`) so stale transcripts do not accumulate on disk. (#14869) Thanks @mcaxtr.
- Status/Sessions: stop clamping derived `totalTokens` to context-window size, keep prompt-token snapshots wired through session accounting, and surface context usage as unknown when fresh snapshot data is missing to avoid false 100% reports. (#15114) Thanks @echoVic.
- Gateway/Routing: speed up hot paths for session listing (derived titles + previews), WS broadcast, and binding resolution.
- CLI/Completion: route plugin-load logs to stderr and write generated completion scripts directly to stdout to avoid `source <(openclaw completion ...)` corruption. (#15481) Thanks @arosstale.
- CLI: lazily load outbound provider dependencies and remove forced success-path exits so commands terminate naturally without killing intentional long-running foreground actions. (#12906) Thanks @DrCrinkle.
- CLI: speed up startup by lazily registering core commands (keeps rich `--help` while reducing cold-start overhead).
- Security/Gateway + ACP: block high-risk tools (`sessions_spawn`, `sessions_send`, `gateway`, `whatsapp_login`) from HTTP `/tools/invoke` by default with `gateway.tools.{allow,deny}` overrides, and harden ACP permission selection to fail closed when tool identity/options are ambiguous while supporting `allow_always`/`reject_always`. (#15390) Thanks @aether-ai-agent.
- Security/ACP: prompt for non-read/search permission requests in ACP clients (reduces silent tool approval risk). Thanks @aether-ai-agent.
- Security/Gateway: breaking default-behavior change - canvas IP-based auth fallback now only accepts machine-scoped addresses (RFC1918, link-local, ULA IPv6, CGNAT); public-source IP matches now require bearer token auth. (#14661) Thanks @sumleo.
- Security/Link understanding: block loopback/internal host patterns and private/mapped IPv6 addresses in extracted URL handling to close SSRF bypasses in link CLI flows. (#15604) Thanks @AI-Reviewer-QS.
- Security/Browser: constrain `POST /trace/stop`, `POST /wait/download`, and `POST /download` output paths to OpenClaw temp roots and reject traversal/escape paths.
@@ -76,6 +83,7 @@ Docs: https://docs.openclaw.ai
- Security/Audit: add misconfiguration checks for sandbox Docker config with sandbox mode off, ineffective `gateway.nodes.denyCommands` entries, global minimal tool-profile overrides by agent profiles, and permissive extension-plugin tool reachability.
- Security/Audit: distinguish external webhooks (`hooks.enabled`) from internal hooks (`hooks.internal.enabled`) in attack-surface summaries to avoid false exposure signals when only internal hooks are enabled. (#13474) Thanks @mcaxtr.
- Security/Onboarding: clarify multi-user DM isolation remediation with explicit `openclaw config set session.dmScope ...` commands in security audit, doctor security, and channel onboarding guidance. (#13129) Thanks @VintLin.
- Security/Gateway: bind node `system.run` approval overrides to gateway exec-approval records (runId-bound), preventing approval-bypass via `node.invoke` param injection. Thanks @222n5.
- Agents/Nodes: harden node exec approval decision handling in the `nodes` tool run path by failing closed on unexpected approval decisions, and add regression coverage for approval-required retry/deny/timeout flows. (#4726) Thanks @rmorse.
- Android/Nodes: harden `app.update` by requiring HTTPS and gateway-host URL matching plus SHA-256 verification, stream URL camera downloads to disk with size guards to avoid memory spikes, and stop signing release builds with debug keys. (#13541) Thanks @smartprogrammer93.
- Routing: enforce strict binding-scope matching across peer/guild/team/roles so peer-scoped Discord/Slack bindings no longer match unrelated guild/team contexts or fallback tiers. (#15274) Thanks @lailoo.

View File

@@ -128,7 +128,7 @@ The `HOOK.md` file contains metadata in YAML frontmatter plus Markdown documenta
---
name: my-hook
description: "Short description of what this hook does"
homepage: https://docs.openclaw.ai/hooks#my-hook
homepage: https://docs.openclaw.ai/automation/hooks#my-hook
metadata:
{ "openclaw": { "emoji": "🔗", "events": ["command:new"], "requires": { "bins": ["node"] } } }
---

View File

@@ -273,6 +273,8 @@ See [Slash commands](/tools/slash-commands) for command catalog and behavior.
- `first`
- `all`
Note: `off` disables implicit reply threading. Explicit `[[reply_to_*]]` tags are still honored.
Message IDs are surfaced in context/history so agents can target specific messages.
</Accordion>

View File

@@ -233,6 +233,8 @@ Manual reply tags are supported:
- `[[reply_to_current]]`
- `[[reply_to:<id>]]`
Note: `replyToMode="off"` disables implicit reply threading. Explicit `[[reply_to_*]]` tags are still honored.
## Media, chunking, and delivery
<AccordionGroup>

View File

@@ -416,6 +416,8 @@ curl "https://api.telegram.org/bot<bot_token>/getUpdates"
- `first`
- `all`
Note: `off` disables implicit reply threading. Explicit `[[reply_to_*]]` tags are still honored.
</Accordion>
<Accordion title="Forum topics and thread behavior">

View File

@@ -90,7 +90,7 @@ Details:
Source: openclaw-bundled
Path: /path/to/openclaw/hooks/bundled/session-memory/HOOK.md
Handler: /path/to/openclaw/hooks/bundled/session-memory/handler.ts
Homepage: https://docs.openclaw.ai/hooks#session-memory
Homepage: https://docs.openclaw.ai/automation/hooks#session-memory
Events: command:new
Requirements:

View File

@@ -786,6 +786,10 @@
{
"source": "/platforms/northflank",
"destination": "/install/northflank"
},
{
"source": "/gateway/trusted-proxy",
"destination": "/gateway/trusted-proxy-auth"
}
],
"navigation": {
@@ -1106,6 +1110,7 @@
"gateway/configuration-reference",
"gateway/configuration-examples",
"gateway/authentication",
"gateway/trusted-proxy-auth",
"gateway/health",
"gateway/heartbeat",
"gateway/doctor",

View File

@@ -1889,9 +1889,10 @@ See [Plugins](/tools/plugin).
port: 18789,
bind: "loopback",
auth: {
mode: "token", // token | password
mode: "token", // token | password | trusted-proxy
token: "your-token",
// password: "your-password", // or OPENCLAW_GATEWAY_PASSWORD
// trustedProxy: { userHeader: "x-forwarded-user" }, // for mode=trusted-proxy; see /gateway/trusted-proxy-auth
allowTailscale: true,
rateLimit: {
maxAttempts: 10,
@@ -1934,6 +1935,7 @@ See [Plugins](/tools/plugin).
- `port`: single multiplexed port for WS + HTTP. Precedence: `--port` > `OPENCLAW_GATEWAY_PORT` > `gateway.port` > `18789`.
- `bind`: `auto`, `loopback` (default), `lan` (`0.0.0.0`), `tailnet` (Tailscale IP only), or `custom`.
- **Auth**: required by default. Non-loopback binds require a shared token/password. Onboarding wizard generates a token by default.
- `auth.mode: "trusted-proxy"`: delegate auth to an identity-aware reverse proxy and trust identity headers from `gateway.trustedProxies` (see [Trusted Proxy Auth](/gateway/trusted-proxy-auth)).
- `auth.allowTailscale`: when `true`, Tailscale Serve identity headers satisfy auth (verified via `tailscale whois`). Defaults to `true` when `tailscale.mode = "serve"`.
- `auth.rateLimit`: optional failed-auth limiter. Applies per client IP and per auth scope (shared-secret and device-token are tracked independently). Blocked attempts return `429` + `Retry-After`.
- `auth.rateLimit.exemptLoopback` defaults to `true`; set `false` when you intentionally want localhost traffic rate-limited too (for test setups or strict proxy deployments).

View File

@@ -439,6 +439,7 @@ Auth modes:
- `gateway.auth.mode: "token"`: shared bearer token (recommended for most setups).
- `gateway.auth.mode: "password"`: password auth (prefer setting via env: `OPENCLAW_GATEWAY_PASSWORD`).
- `gateway.auth.mode: "trusted-proxy"`: trust an identity-aware reverse proxy to authenticate users and pass identity via headers (see [Trusted Proxy Auth](/gateway/trusted-proxy-auth)).
Rotation checklist (token/password):
@@ -459,7 +460,7 @@ injected by Tailscale.
**Security rule:** do not forward these headers from your own reverse proxy. If
you terminate TLS or proxy in front of the gateway, disable
`gateway.auth.allowTailscale` and use token/password auth instead.
`gateway.auth.allowTailscale` and use token/password auth (or [Trusted Proxy Auth](/gateway/trusted-proxy-auth)) instead.
Trusted proxies:

View File

@@ -44,6 +44,7 @@ Channel options:
Related global options:
- `gateway.port`, `gateway.bind`: WebSocket host/port.
- `gateway.auth.mode`, `gateway.auth.token`, `gateway.auth.password`: WebSocket auth.
- `gateway.auth.mode`, `gateway.auth.token`, `gateway.auth.password`: WebSocket auth (token/password).
- `gateway.auth.mode: "trusted-proxy"`: reverse-proxy auth for browser clients (see [Trusted Proxy Auth](/gateway/trusted-proxy-auth)).
- `gateway.remote.url`, `gateway.remote.token`, `gateway.remote.password`: remote gateway target.
- `session.*`: session storage and main key defaults.

View File

@@ -133,7 +133,7 @@ Hook 包可以附带依赖;它们将安装在 `~/.openclaw/hooks/<id>` 下。
---
name: my-hook
description: "Short description of what this hook does"
homepage: https://docs.openclaw.ai/hooks#my-hook
homepage: https://docs.openclaw.ai/automation/hooks#my-hook
metadata:
{ "openclaw": { "emoji": "🔗", "events": ["command:new"], "requires": { "bins": ["node"] } } }
---

View File

@@ -96,7 +96,7 @@ Details:
Source: openclaw-bundled
Path: /path/to/openclaw/hooks/bundled/session-memory/HOOK.md
Handler: /path/to/openclaw/hooks/bundled/session-memory/handler.ts
Homepage: https://docs.openclaw.ai/hooks#session-memory
Homepage: https://docs.openclaw.ai/automation/hooks#session-memory
Events: command:new
Requirements:

View File

@@ -48,6 +48,55 @@ describe("resolvePermissionRequest", () => {
expect(res).toEqual({ outcome: { outcome: "selected", optionId: "allow" } });
});
it("prompts for non-read/search tools (write)", async () => {
const prompt = vi.fn(async () => true);
const res = await resolvePermissionRequest(
makePermissionRequest({
toolCall: { toolCallId: "tool-w", title: "write: /tmp/pwn", status: "pending" },
}),
{ prompt, log: () => {} },
);
expect(prompt).toHaveBeenCalledTimes(1);
expect(prompt).toHaveBeenCalledWith("write", "write: /tmp/pwn");
expect(res).toEqual({ outcome: { outcome: "selected", optionId: "allow" } });
});
it("auto-approves search without prompting", async () => {
const prompt = vi.fn(async () => true);
const res = await resolvePermissionRequest(
makePermissionRequest({
toolCall: { toolCallId: "tool-s", title: "search: foo", status: "pending" },
}),
{ prompt, log: () => {} },
);
expect(res).toEqual({ outcome: { outcome: "selected", optionId: "allow" } });
expect(prompt).not.toHaveBeenCalled();
});
it("prompts for fetch even when tool name is known", async () => {
const prompt = vi.fn(async () => false);
const res = await resolvePermissionRequest(
makePermissionRequest({
toolCall: { toolCallId: "tool-f", title: "fetch: https://example.com", status: "pending" },
}),
{ prompt, log: () => {} },
);
expect(prompt).toHaveBeenCalledTimes(1);
expect(res).toEqual({ outcome: { outcome: "selected", optionId: "reject" } });
});
it("prompts when tool name contains read/search substrings but isn't a safe kind", async () => {
const prompt = vi.fn(async () => false);
const res = await resolvePermissionRequest(
makePermissionRequest({
toolCall: { toolCallId: "tool-t", title: "thread: reply", status: "pending" },
}),
{ prompt, log: () => {} },
);
expect(prompt).toHaveBeenCalledTimes(1);
expect(res).toEqual({ outcome: { outcome: "selected", optionId: "reject" } });
});
it("uses allow_always and reject_always when once options are absent", async () => {
const options: RequestPermissionRequest["options"] = [
{ kind: "allow_always", name: "Always allow", optionId: "allow-always" },

View File

@@ -10,24 +10,9 @@ import { spawn, type ChildProcess } from "node:child_process";
import * as readline from "node:readline";
import { Readable, Writable } from "node:stream";
import { ensureOpenClawCliOnPath } from "../infra/path-env.js";
import { DANGEROUS_ACP_TOOLS } from "../security/dangerous-tools.js";
/**
* Tools that require explicit user approval in ACP sessions.
* These tools can execute arbitrary code, modify the filesystem,
* or access sensitive resources.
*/
const DANGEROUS_ACP_TOOLS = new Set([
"exec",
"spawn",
"shell",
"sessions_spawn",
"sessions_send",
"gateway",
"fs_write",
"fs_delete",
"fs_move",
"apply_patch",
]);
const SAFE_AUTO_APPROVE_KINDS = new Set(["read", "search"]);
type PermissionOption = RequestPermissionRequest["options"][number];
@@ -77,6 +62,54 @@ function parseToolNameFromTitle(title: string | undefined | null): string | unde
return normalizeToolName(head);
}
function resolveToolKindForPermission(
params: RequestPermissionRequest,
toolName: string | undefined,
): string | undefined {
const toolCall = params.toolCall as unknown as { kind?: unknown; title?: unknown } | undefined;
const kindRaw = typeof toolCall?.kind === "string" ? toolCall.kind.trim().toLowerCase() : "";
if (kindRaw) {
return kindRaw;
}
const name =
toolName ??
parseToolNameFromTitle(typeof toolCall?.title === "string" ? toolCall.title : undefined);
if (!name) {
return undefined;
}
const normalized = name.toLowerCase();
const hasToken = (token: string) => {
// Tool names tend to be snake_case. Avoid substring heuristics (ex: "thread" contains "read").
const re = new RegExp(`(?:^|[._-])${token}(?:$|[._-])`);
return re.test(normalized);
};
// Prefer a conservative classifier: only classify safe kinds when confident.
if (normalized === "read" || hasToken("read")) {
return "read";
}
if (normalized === "search" || hasToken("search") || hasToken("find")) {
return "search";
}
if (normalized.includes("fetch") || normalized.includes("http")) {
return "fetch";
}
if (normalized.includes("write") || normalized.includes("edit") || normalized.includes("patch")) {
return "edit";
}
if (normalized.includes("delete") || normalized.includes("remove")) {
return "delete";
}
if (normalized.includes("move") || normalized.includes("rename")) {
return "move";
}
if (normalized.includes("exec") || normalized.includes("run") || normalized.includes("bash")) {
return "execute";
}
return "other";
}
function resolveToolNameForPermission(params: RequestPermissionRequest): string | undefined {
const toolCall = params.toolCall;
const toolMeta = asRecord(toolCall?._meta);
@@ -158,6 +191,7 @@ export async function resolvePermissionRequest(
const options = params.options ?? [];
const toolTitle = params.toolCall?.title ?? "tool";
const toolName = resolveToolNameForPermission(params);
const toolKind = resolveToolKindForPermission(params, toolName);
if (options.length === 0) {
log(`[permission cancelled] ${toolName ?? "unknown"}: no options available`);
@@ -166,7 +200,8 @@ export async function resolvePermissionRequest(
const allowOption = pickOption(options, ["allow_once", "allow_always"]);
const rejectOption = pickOption(options, ["reject_once", "reject_always"]);
const promptRequired = !toolName || DANGEROUS_ACP_TOOLS.has(toolName);
const isSafeKind = Boolean(toolKind && SAFE_AUTO_APPROVE_KINDS.has(toolKind));
const promptRequired = !toolName || !isSafeKind || DANGEROUS_ACP_TOOLS.has(toolName);
if (!promptRequired) {
const option = allowOption ?? options[0];
@@ -174,11 +209,13 @@ export async function resolvePermissionRequest(
log(`[permission cancelled] ${toolName}: no selectable options`);
return cancelledPermission();
}
log(`[permission auto-approved] ${toolName}`);
log(`[permission auto-approved] ${toolName} (${toolKind ?? "unknown"})`);
return selectedPermission(option.optionId);
}
log(`\n[permission requested] ${toolTitle}${toolName ? ` (${toolName})` : ""}`);
log(
`\n[permission requested] ${toolTitle}${toolName ? ` (${toolName})` : ""}${toolKind ? ` [${toolKind}]` : ""}`,
);
const approved = await prompt(toolName, toolTitle);
if (approved && allowOption) {

View File

@@ -135,6 +135,7 @@ describe("nodes run", () => {
it("requests approval and retries with allow-once decision", async () => {
let invokeCalls = 0;
let approvalId: string | null = null;
callGateway.mockImplementation(async ({ method, params }) => {
if (method === "node.list") {
return { nodes: [{ nodeId: "mac-1", commands: ["system.run"] }] };
@@ -149,6 +150,7 @@ describe("nodes run", () => {
command: "system.run",
params: {
command: ["echo", "hi"],
runId: approvalId,
approved: true,
approvalDecision: "allow-once",
},
@@ -157,10 +159,15 @@ describe("nodes run", () => {
}
if (method === "exec.approval.request") {
expect(params).toMatchObject({
id: expect.any(String),
command: "echo hi",
host: "node",
timeoutMs: 120_000,
});
approvalId =
typeof (params as { id?: unknown } | undefined)?.id === "string"
? ((params as { id: string }).id ?? null)
: null;
return { decision: "allow-once" };
}
throw new Error(`unexpected method: ${String(method)}`);

View File

@@ -467,10 +467,12 @@ export function createNodesTool(options?: {
// the gateway and wait for the user to approve/deny via the UI.
const APPROVAL_TIMEOUT_MS = 120_000;
const cmdText = command.join(" ");
const approvalId = crypto.randomUUID();
const approvalResult = await callGatewayTool(
"exec.approval.request",
{ ...gatewayOpts, timeoutMs: APPROVAL_TIMEOUT_MS + 5_000 },
{
id: approvalId,
command: cmdText,
cwd,
host: "node",
@@ -502,6 +504,7 @@ export function createNodesTool(options?: {
command: "system.run",
params: {
...runParams,
runId: approvalId,
approved: true,
approvalDecision,
},

View File

@@ -72,4 +72,17 @@ describe("applyReplyThreading auto-threading", () => {
expect(result[0].replyToId).toBe("42");
expect(result[0].replyToTag).toBe(true);
});
it("keeps explicit tags for Telegram when off mode is enabled", () => {
const result = applyReplyThreading({
payloads: [{ text: "[[reply_to_current]]A" }],
replyToMode: "off",
replyToChannel: "telegram",
currentMessageId: "42",
});
expect(result).toHaveLength(1);
expect(result[0].replyToId).toBe("42");
expect(result[0].replyToTag).toBe(true);
});
});

View File

@@ -54,9 +54,11 @@ export function createReplyToModeFilterForChannel(
channel?: OriginatingChannelType,
) {
const provider = normalizeChannelId(channel);
// Always honour explicit [[reply_to_*]] tags even when replyToMode is "off".
// Per-channel opt-out is possible but the safe default is to allow them.
const allowTagsWhenOff = provider
? Boolean(getChannelDock(provider)?.threading?.allowTagsWhenOff)
: false;
? (getChannelDock(provider)?.threading?.allowTagsWhenOff ?? true)
: true;
return createReplyToModeFilter(mode, {
allowTagsWhenOff,
});

View File

@@ -9,11 +9,8 @@ import { slackOutbound } from "../../channels/plugins/outbound/slack.js";
import { telegramOutbound } from "../../channels/plugins/outbound/telegram.js";
import { whatsappOutbound } from "../../channels/plugins/outbound/whatsapp.js";
import { setActivePluginRegistry } from "../../plugins/runtime.js";
import {
createIMessageTestPlugin,
createOutboundTestPlugin,
createTestRegistry,
} from "../../test-utils/channel-plugins.js";
import { createOutboundTestPlugin, createTestRegistry } from "../../test-utils/channel-plugins.js";
import { createIMessageTestPlugin } from "../../test-utils/imessage-test-plugin.js";
import { SILENT_REPLY_TOKEN } from "../tokens.js";
const mocks = vi.hoisted(() => ({

View File

@@ -5,6 +5,8 @@ import path from "node:path";
import { resolveStateDir } from "../config/paths.js";
import { routeLogsToStderr } from "../logging/console.js";
import { pathExists } from "../utils.js";
import { getCoreCliCommandNames, registerCoreCliByName } from "./program/command-registry.js";
import { getProgramContext } from "./program/program-context.js";
import { getSubCliEntries, registerSubCliByName } from "./program/register.subclis.js";
const COMPLETION_SHELLS = ["zsh", "bash", "powershell", "fish"] as const;
@@ -240,6 +242,16 @@ export function registerCompletionCli(program: Command) {
// the completion script written to stdout.
routeLogsToStderr();
const shell = options.shell ?? "zsh";
// Completion needs the full Commander command tree (including nested subcommands).
// Our CLI defaults to lazy registration for perf; force-register core commands here.
const ctx = getProgramContext(program);
if (ctx) {
for (const name of getCoreCliCommandNames()) {
await registerCoreCliByName(program, ctx, name);
}
}
// Eagerly register all subcommands to build the full tree
const entries = getSubCliEntries();
for (const entry of entries) {

View File

@@ -16,7 +16,7 @@ const report: HookStatusReport = {
handlerPath: "/tmp/hooks/session-memory/handler.js",
hookKey: "session-memory",
emoji: "💾",
homepage: "https://docs.openclaw.ai/hooks#session-memory",
homepage: "https://docs.openclaw.ai/automation/hooks#session-memory",
events: ["command:new"],
always: false,
disabled: false,

View File

@@ -128,6 +128,7 @@ describe("nodes-cli coverage", () => {
agentId: "main",
approved: true,
approvalDecision: "allow-once",
runId: expect.any(String),
});
expect(invoke?.params?.timeoutMs).toBe(5000);
});
@@ -153,6 +154,7 @@ describe("nodes-cli coverage", () => {
agentId: "main",
approved: true,
approvalDecision: "allow-once",
runId: expect.any(String),
});
});

View File

@@ -269,8 +269,11 @@ export function registerNodesInvokeCommands(nodes: Command) {
}
const requiresAsk = hostAsk === "always" || hostAsk === "on-miss";
let approvalId: string | null = null;
if (requiresAsk) {
approvalId = crypto.randomUUID();
const decisionResult = (await callGatewayCli("exec.approval.request", opts, {
id: approvalId,
command: rawCommand ?? argv.join(" "),
cwd: opts.cwd,
host: "node",
@@ -330,6 +333,9 @@ export function registerNodesInvokeCommands(nodes: Command) {
if (approvalDecision) {
(invokeParams.params as Record<string, unknown>).approvalDecision = approvalDecision;
}
if (approvedByAsk && approvalId) {
(invokeParams.params as Record<string, unknown>).runId = approvalId;
}
if (invokeTimeout !== undefined) {
invokeParams.timeoutMs = invokeTimeout;
}

View File

@@ -3,12 +3,14 @@ import { registerProgramCommands } from "./command-registry.js";
import { createProgramContext } from "./context.js";
import { configureProgramHelp } from "./help.js";
import { registerPreActionHooks } from "./preaction.js";
import { setProgramContext } from "./program-context.js";
export function buildProgram() {
const program = new Command();
const ctx = createProgramContext();
const argv = process.argv;
setProgramContext(program, ctx);
configureProgramHelp(program, ctx);
registerPreActionHooks(program, ctx.programVersion);

View File

@@ -1,15 +1,7 @@
import type { Command } from "commander";
import type { ProgramContext } from "./context.js";
import { registerBrowserCli } from "../browser-cli.js";
import { registerConfigCli } from "../config-cli.js";
import { registerMemoryCli } from "../memory-cli.js";
import { registerAgentCommands } from "./register.agent.js";
import { registerConfigureCommand } from "./register.configure.js";
import { registerMaintenanceCommands } from "./register.maintenance.js";
import { registerMessageCommands } from "./register.message.js";
import { registerOnboardCommand } from "./register.onboard.js";
import { registerSetupCommand } from "./register.setup.js";
import { registerStatusHealthSessionsCommands } from "./register.status-health-sessions.js";
import { buildParseArgv, getPrimaryCommand, hasHelpOrVersion } from "../argv.js";
import { resolveActionArgs } from "./helpers.js";
import { registerSubCliCommands } from "./register.subclis.js";
type CommandRegisterParams = {
@@ -23,60 +15,198 @@ export type CommandRegistration = {
register: (params: CommandRegisterParams) => void;
};
export const commandRegistry: CommandRegistration[] = [
type CoreCliEntry = {
commands: Array<{ name: string; description: string }>;
register: (params: CommandRegisterParams) => Promise<void> | void;
};
const shouldRegisterCorePrimaryOnly = (argv: string[]) => {
if (hasHelpOrVersion(argv)) {
return false;
}
return true;
};
const coreEntries: CoreCliEntry[] = [
{
id: "setup",
register: ({ program }) => registerSetupCommand(program),
commands: [{ name: "setup", description: "Setup helpers" }],
register: async ({ program }) => {
const mod = await import("./register.setup.js");
mod.registerSetupCommand(program);
},
},
{
id: "onboard",
register: ({ program }) => registerOnboardCommand(program),
commands: [{ name: "onboard", description: "Onboarding helpers" }],
register: async ({ program }) => {
const mod = await import("./register.onboard.js");
mod.registerOnboardCommand(program);
},
},
{
id: "configure",
register: ({ program }) => registerConfigureCommand(program),
commands: [{ name: "configure", description: "Configure wizard" }],
register: async ({ program }) => {
const mod = await import("./register.configure.js");
mod.registerConfigureCommand(program);
},
},
{
id: "config",
register: ({ program }) => registerConfigCli(program),
commands: [{ name: "config", description: "Config helpers" }],
register: async ({ program }) => {
const mod = await import("../config-cli.js");
mod.registerConfigCli(program);
},
},
{
id: "maintenance",
register: ({ program }) => registerMaintenanceCommands(program),
commands: [{ name: "maintenance", description: "Maintenance commands" }],
register: async ({ program }) => {
const mod = await import("./register.maintenance.js");
mod.registerMaintenanceCommands(program);
},
},
{
id: "message",
register: ({ program, ctx }) => registerMessageCommands(program, ctx),
commands: [{ name: "message", description: "Send, read, and manage messages" }],
register: async ({ program, ctx }) => {
const mod = await import("./register.message.js");
mod.registerMessageCommands(program, ctx);
},
},
{
id: "memory",
register: ({ program }) => registerMemoryCli(program),
commands: [{ name: "memory", description: "Memory commands" }],
register: async ({ program }) => {
const mod = await import("../memory-cli.js");
mod.registerMemoryCli(program);
},
},
{
id: "agent",
register: ({ program, ctx }) =>
registerAgentCommands(program, { agentChannelOptions: ctx.agentChannelOptions }),
commands: [{ name: "agent", description: "Agent commands" }],
register: async ({ program, ctx }) => {
const mod = await import("./register.agent.js");
mod.registerAgentCommands(program, { agentChannelOptions: ctx.agentChannelOptions });
},
},
{
id: "subclis",
register: ({ program, argv }) => registerSubCliCommands(program, argv),
commands: [
{ name: "status", description: "Gateway status" },
{ name: "health", description: "Gateway health" },
{ name: "sessions", description: "Session management" },
],
register: async ({ program }) => {
const mod = await import("./register.status-health-sessions.js");
mod.registerStatusHealthSessionsCommands(program);
},
},
{
id: "status-health-sessions",
register: ({ program }) => registerStatusHealthSessionsCommands(program),
},
{
id: "browser",
register: ({ program }) => registerBrowserCli(program),
commands: [{ name: "browser", description: "Browser tools" }],
register: async ({ program }) => {
const mod = await import("../browser-cli.js");
mod.registerBrowserCli(program);
},
},
];
export function getCoreCliCommandNames(): string[] {
const seen = new Set<string>();
const names: string[] = [];
for (const entry of coreEntries) {
for (const cmd of entry.commands) {
if (seen.has(cmd.name)) {
continue;
}
seen.add(cmd.name);
names.push(cmd.name);
}
}
return names;
}
function removeCommand(program: Command, command: Command) {
const commands = program.commands as Command[];
const index = commands.indexOf(command);
if (index >= 0) {
commands.splice(index, 1);
}
}
function registerLazyCoreCommand(
program: Command,
ctx: ProgramContext,
entry: CoreCliEntry,
command: { name: string; description: string },
) {
const placeholder = program.command(command.name).description(command.description);
placeholder.allowUnknownOption(true);
placeholder.allowExcessArguments(true);
placeholder.action(async (...actionArgs) => {
removeCommand(program, placeholder);
await entry.register({ program, ctx, argv: process.argv });
const actionCommand = actionArgs.at(-1) as Command | undefined;
const root = actionCommand?.parent ?? program;
const rawArgs = (root as Command & { rawArgs?: string[] }).rawArgs;
const actionArgsList = resolveActionArgs(actionCommand);
const fallbackArgv = actionCommand?.name()
? [actionCommand.name(), ...actionArgsList]
: actionArgsList;
const parseArgv = buildParseArgv({
programName: program.name(),
rawArgs,
fallbackArgv,
});
await program.parseAsync(parseArgv);
});
}
export async function registerCoreCliByName(
program: Command,
ctx: ProgramContext,
name: string,
argv: string[] = process.argv,
): Promise<boolean> {
const entry = coreEntries.find((candidate) =>
candidate.commands.some((cmd) => cmd.name === name),
);
if (!entry) {
return false;
}
// Some registrars install multiple top-level commands (e.g. status/health/sessions).
// Remove placeholders/old registrations for all names in the entry before re-registering.
for (const cmd of entry.commands) {
const existing = program.commands.find((c) => c.name() === cmd.name);
if (existing) {
removeCommand(program, existing);
}
}
await entry.register({ program, ctx, argv });
return true;
}
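The lookup-by-name flow above can be reduced to a small sketch. This is not the real commander wiring; the string-based registry, entry names, and return values are illustrative, standing in for `program.command(...)` registration. It shows the one subtle point called out in the comment: an entry may own several top-level commands, so registering by any one name installs all of its siblings.

```typescript
// Minimal sketch of registerCoreCliByName's lookup: find the entry that owns
// the requested command name, then let that entry register everything it owns.
type Entry = {
  commands: string[];
  register: (registry: Map<string, () => string>) => void;
};

const entries: Entry[] = [
  {
    // One registrar installs multiple top-level commands, like
    // status/health/sessions in the diff above.
    commands: ["status", "health"],
    register: (registry) => {
      registry.set("status", () => "status: ok");
      registry.set("health", () => "health: ok");
    },
  },
];

function registerByName(registry: Map<string, () => string>, name: string): boolean {
  const entry = entries.find((e) => e.commands.includes(name));
  if (!entry) {
    return false;
  }
  // Registering the entry installs all names it owns, not just `name`.
  entry.register(registry);
  return true;
}
```

Asking for `health` therefore also makes `status` available, which is why the real code removes placeholders for *all* names in the entry before re-registering.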
export function registerCoreCliCommands(program: Command, ctx: ProgramContext, argv: string[]) {
const primary = getPrimaryCommand(argv);
if (primary && shouldRegisterCorePrimaryOnly(argv)) {
const entry = coreEntries.find((candidate) =>
candidate.commands.some((cmd) => cmd.name === primary),
);
if (entry) {
const cmd = entry.commands.find((c) => c.name === primary);
if (cmd) {
registerLazyCoreCommand(program, ctx, entry, cmd);
}
return;
}
}
for (const entry of coreEntries) {
for (const cmd of entry.commands) {
registerLazyCoreCommand(program, ctx, entry, cmd);
}
}
}
export function registerProgramCommands(
program: Command,
ctx: ProgramContext,
argv: string[] = process.argv,
) {
for (const entry of commandRegistry) {
entry.register({ program, ctx, argv });
}
registerCoreCliCommands(program, ctx, argv);
registerSubCliCommands(program, argv);
}

View File

@@ -20,11 +20,22 @@ const ALLOWED_INVALID_GATEWAY_SUBCOMMANDS = new Set([
"restart",
]);
let didRunDoctorConfigFlow = false;
let configSnapshotPromise: Promise<Awaited<ReturnType<typeof readConfigFileSnapshot>>> | null =
null;
function formatConfigIssues(issues: Array<{ path: string; message: string }>): string[] {
return issues.map((issue) => `- ${issue.path || "<root>"}: ${issue.message}`);
}
async function getConfigSnapshot() {
// Tests often mutate config fixtures; caching can make those flaky.
if (process.env.VITEST === "true") {
return readConfigFileSnapshot();
}
configSnapshotPromise ??= readConfigFileSnapshot();
return configSnapshotPromise;
}
export async function ensureConfigReady(params: {
runtime: RuntimeEnv;
commandPath?: string[];
@@ -38,7 +49,7 @@ export async function ensureConfigReady(params: {
});
}
const snapshot = await readConfigFileSnapshot();
const snapshot = await getConfigSnapshot();
const commandName = commandPath[0];
const subcommandName = commandPath[1];
const allowInvalid = commandName

View File

@@ -5,8 +5,6 @@ import { defaultRuntime } from "../../runtime.js";
import { getCommandPath, getVerboseFlag, hasHelpOrVersion } from "../argv.js";
import { emitCliBanner } from "../banner.js";
import { resolveCliName } from "../cli-name.js";
import { ensurePluginRegistryLoaded } from "../plugin-registry.js";
import { ensureConfigReady } from "./config-guard.js";
function setProcessTitleForCommand(actionCommand: Command) {
let current: Command = actionCommand;
@@ -48,9 +46,11 @@ export function registerPreActionHooks(program: Command, programVersion: string)
if (commandPath[0] === "doctor" || commandPath[0] === "completion") {
return;
}
const { ensureConfigReady } = await import("./config-guard.js");
await ensureConfigReady({ runtime: defaultRuntime, commandPath });
// Load plugins for commands that need channel access
if (PLUGIN_REQUIRED_COMMANDS.has(commandPath[0])) {
const { ensurePluginRegistryLoaded } = await import("../plugin-registry.js");
ensurePluginRegistryLoaded();
}
});

View File

@@ -0,0 +1,15 @@
import type { Command } from "commander";
import type { ProgramContext } from "./context.js";
const PROGRAM_CONTEXT_SYMBOL: unique symbol = Symbol.for("openclaw.cli.programContext");
export function setProgramContext(program: Command, ctx: ProgramContext): void {
(program as Command & { [PROGRAM_CONTEXT_SYMBOL]?: ProgramContext })[PROGRAM_CONTEXT_SYMBOL] =
ctx;
}
export function getProgramContext(program: Command): ProgramContext | undefined {
return (program as Command & { [PROGRAM_CONTEXT_SYMBOL]?: ProgramContext })[
PROGRAM_CONTEXT_SYMBOL
];
}
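The new `program-context.ts` attaches the context to the `Command` object itself under a `Symbol.for` key, so any module holding the program instance can recover the context without an import cycle, and unrelated objects are unaffected. A stripped-down sketch of the same pattern (plain objects instead of commander's `Command`, illustrative key and shape):

```typescript
// Symbol-keyed attachment: Symbol.for() returns the same symbol for the same
// key process-wide, so setter and getter agree even across modules.
const CTX_KEY: unique symbol = Symbol.for("example.programContext");

type Ctx = { programVersion: string };

function attachCtx(host: object, ctx: Ctx): void {
  (host as { [CTX_KEY]?: Ctx })[CTX_KEY] = ctx;
}

function readCtx(host: object): Ctx | undefined {
  // Objects that were never attached simply return undefined.
  return (host as { [CTX_KEY]?: Ctx })[CTX_KEY];
}
```

Because the key is a symbol, it does not collide with commander's own string properties and stays invisible to `JSON.stringify` and `Object.keys`.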

View File

@@ -93,9 +93,16 @@ export async function runCli(argv: string[] = process.argv) {
});
const parseArgv = rewriteUpdateFlagArgv(normalizedArgv);
// Register the primary subcommand if one exists (for lazy-loading)
// Register the primary command (builtin or subcli) so help and command parsing
// are correct even with lazy command registration.
const primary = getPrimaryCommand(parseArgv);
if (primary && shouldRegisterPrimarySubcommand(parseArgv)) {
if (primary) {
const { getProgramContext } = await import("./program/program-context.js");
const ctx = getProgramContext(program);
if (ctx) {
const { registerCoreCliByName } = await import("./program/command-registry.js");
await registerCoreCliByName(program, ctx, primary, parseArgv);
}
const { registerSubCliByName } = await import("./program/register.subclis.js");
await registerSubCliByName(program, primary);
}

View File

@@ -54,6 +54,7 @@ describe("buildAuthChoiceOptions", () => {
});
expect(options.some((opt) => opt.value === "minimax-api")).toBe(true);
expect(options.some((opt) => opt.value === "minimax-api-key-cn")).toBe(true);
expect(options.some((opt) => opt.value === "minimax-api-lightning")).toBe(true);
});

View File

@@ -50,7 +50,7 @@ const AUTH_CHOICE_GROUP_DEFS: {
value: "minimax",
label: "MiniMax",
hint: "M2.5 (recommended)",
choices: ["minimax-portal", "minimax-api", "minimax-api-lightning"],
choices: ["minimax-portal", "minimax-api", "minimax-api-key-cn", "minimax-api-lightning"],
},
{
value: "moonshot",
@@ -286,6 +286,11 @@ const BASE_AUTH_CHOICE_OPTIONS: ReadonlyArray<AuthChoiceOption> = [
hint: "Claude, GPT, Gemini via opencode.ai/zen",
},
{ value: "minimax-api", label: "MiniMax M2.5" },
{
value: "minimax-api-key-cn",
label: "MiniMax M2.5 (CN)",
hint: "China endpoint (api.minimaxi.com)",
},
{
value: "minimax-api-lightning",
label: "MiniMax M2.5 Lightning",

View File

@@ -10,7 +10,9 @@ import { applyDefaultModelChoice } from "./auth-choice.default-model.js";
import {
applyAuthProfileConfig,
applyMinimaxApiConfig,
applyMinimaxApiConfigCn,
applyMinimaxApiProviderConfig,
applyMinimaxApiProviderConfigCn,
applyMinimaxConfig,
applyMinimaxProviderConfig,
setMinimaxApiKey,
@@ -97,6 +99,49 @@ export async function applyAuthChoiceMiniMax(
return { config: nextConfig, agentModelOverride };
}
if (params.authChoice === "minimax-api-key-cn") {
const modelId = "MiniMax-M2.5";
let hasCredential = false;
const envKey = resolveEnvApiKey("minimax");
if (envKey) {
const useExisting = await params.prompter.confirm({
message: `Use existing MINIMAX_API_KEY (${envKey.source}, ${formatApiKeyPreview(envKey.apiKey)})?`,
initialValue: true,
});
if (useExisting) {
await setMinimaxApiKey(envKey.apiKey, params.agentDir);
hasCredential = true;
}
}
if (!hasCredential) {
const key = await params.prompter.text({
message: "Enter MiniMax China API key",
validate: validateApiKeyInput,
});
await setMinimaxApiKey(normalizeApiKeyInput(String(key)), params.agentDir);
}
nextConfig = applyAuthProfileConfig(nextConfig, {
profileId: "minimax:default",
provider: "minimax",
mode: "api_key",
});
{
const modelRef = `minimax/${modelId}`;
const applied = await applyDefaultModelChoice({
config: nextConfig,
setDefaultModel: params.setDefaultModel,
defaultModel: modelRef,
applyDefaultConfig: applyMinimaxApiConfigCn,
applyProviderConfig: applyMinimaxApiProviderConfigCn,
noteAgentModel,
prompter: params.prompter,
});
nextConfig = applied.config;
agentModelOverride = applied.agentModelOverride ?? agentModelOverride;
}
return { config: nextConfig, agentModelOverride };
}
if (params.authChoice === "minimax") {
const applied = await applyDefaultModelChoice({
config: nextConfig,

View File

@@ -6,7 +6,11 @@ import type { RuntimeEnv } from "../runtime.js";
import type { WizardPrompter } from "../wizard/prompts.js";
import type { AuthChoice } from "./onboard-types.js";
import { applyAuthChoice, resolvePreferredProviderForAuthChoice } from "./auth-choice.js";
import { ZAI_CODING_CN_BASE_URL, ZAI_CODING_GLOBAL_BASE_URL } from "./onboard-auth.js";
import {
MINIMAX_CN_API_BASE_URL,
ZAI_CODING_CN_BASE_URL,
ZAI_CODING_GLOBAL_BASE_URL,
} from "./onboard-auth.js";
vi.mock("../providers/github-copilot-auth.js", () => ({
githubCopilotLoginCommand: vi.fn(async () => {}),
@@ -209,6 +213,60 @@ describe("applyAuthChoice", () => {
expect(parsed.profiles?.["minimax:default"]?.key).toBe("sk-minimax-test");
});
it("prompts and writes MiniMax API key when selecting minimax-api-key-cn", async () => {
tempStateDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-auth-"));
process.env.OPENCLAW_STATE_DIR = tempStateDir;
process.env.OPENCLAW_AGENT_DIR = path.join(tempStateDir, "agent");
process.env.PI_CODING_AGENT_DIR = process.env.OPENCLAW_AGENT_DIR;
const text = vi.fn().mockResolvedValue("sk-minimax-test");
const select: WizardPrompter["select"] = vi.fn(
async (params) => params.options[0]?.value as never,
);
const multiselect: WizardPrompter["multiselect"] = vi.fn(async () => []);
const prompter: WizardPrompter = {
intro: vi.fn(noopAsync),
outro: vi.fn(noopAsync),
note: vi.fn(noopAsync),
select,
multiselect,
text,
confirm: vi.fn(async () => false),
progress: vi.fn(() => ({ update: noop, stop: noop })),
};
const runtime: RuntimeEnv = {
log: vi.fn(),
error: vi.fn(),
exit: vi.fn((code: number) => {
throw new Error(`exit:${code}`);
}),
};
const result = await applyAuthChoice({
authChoice: "minimax-api-key-cn",
config: {},
prompter,
runtime,
setDefaultModel: true,
});
expect(text).toHaveBeenCalledWith(
expect.objectContaining({ message: "Enter MiniMax China API key" }),
);
expect(result.config.auth?.profiles?.["minimax:default"]).toMatchObject({
provider: "minimax",
mode: "api_key",
});
expect(result.config.models?.providers?.minimax?.baseUrl).toBe(MINIMAX_CN_API_BASE_URL);
const authProfilePath = authProfilePathFor(requireAgentDir());
const raw = await fs.readFile(authProfilePath, "utf8");
const parsed = JSON.parse(raw) as {
profiles?: Record<string, { key?: string }>;
};
expect(parsed.profiles?.["minimax:default"]?.key).toBe("sk-minimax-test");
});
it("prompts and writes Synthetic API key when selecting synthetic-api-key", async () => {
tempStateDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-auth-"));
process.env.OPENCLAW_STATE_DIR = tempStateDir;

View File

@@ -34,6 +34,7 @@ const PREFERRED_PROVIDER_BY_AUTH_CHOICE: Partial<Record<AuthChoice, string>> = {
"copilot-proxy": "copilot-proxy",
"minimax-cloud": "minimax",
"minimax-api": "minimax",
"minimax-api-key-cn": "minimax",
"minimax-api-lightning": "minimax",
minimax: "lmstudio",
"opencode-zen": "opencode",

View File

@@ -2,7 +2,8 @@ import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
import type { RuntimeEnv } from "../runtime.js";
import { signalPlugin } from "../../extensions/signal/src/channel.js";
import { setActivePluginRegistry } from "../plugins/runtime.js";
import { createIMessageTestPlugin, createTestRegistry } from "../test-utils/channel-plugins.js";
import { createTestRegistry } from "../test-utils/channel-plugins.js";
import { createIMessageTestPlugin } from "../test-utils/imessage-test-plugin.js";
const configMocks = vi.hoisted(() => ({
readConfigFileSnapshot: vi.fn(),

View File

@@ -6,6 +6,7 @@ import {
DEFAULT_MINIMAX_CONTEXT_WINDOW,
DEFAULT_MINIMAX_MAX_TOKENS,
MINIMAX_API_BASE_URL,
MINIMAX_CN_API_BASE_URL,
MINIMAX_HOSTED_COST,
MINIMAX_HOSTED_MODEL_ID,
MINIMAX_HOSTED_MODEL_REF,
@@ -148,7 +149,37 @@ export function applyMinimaxHostedConfig(
// MiniMax Anthropic-compatible API (platform.minimax.io/anthropic)
export function applyMinimaxApiProviderConfig(
cfg: OpenClawConfig,
modelId: string = "MiniMax-M2.1",
modelId: string = "MiniMax-M2.5",
): OpenClawConfig {
return applyMinimaxApiProviderConfigWithBaseUrl(cfg, modelId, MINIMAX_API_BASE_URL);
}
export function applyMinimaxApiConfig(
cfg: OpenClawConfig,
modelId: string = "MiniMax-M2.5",
): OpenClawConfig {
return applyMinimaxApiConfigWithBaseUrl(cfg, modelId, MINIMAX_API_BASE_URL);
}
// MiniMax China API (api.minimaxi.com)
export function applyMinimaxApiProviderConfigCn(
cfg: OpenClawConfig,
modelId: string = "MiniMax-M2.5",
): OpenClawConfig {
return applyMinimaxApiProviderConfigWithBaseUrl(cfg, modelId, MINIMAX_CN_API_BASE_URL);
}
export function applyMinimaxApiConfigCn(
cfg: OpenClawConfig,
modelId: string = "MiniMax-M2.5",
): OpenClawConfig {
return applyMinimaxApiConfigWithBaseUrl(cfg, modelId, MINIMAX_CN_API_BASE_URL);
}
function applyMinimaxApiProviderConfigWithBaseUrl(
cfg: OpenClawConfig,
modelId: string,
baseUrl: string,
): OpenClawConfig {
const providers = { ...cfg.models?.providers };
const existingProvider = providers.minimax;
@@ -164,7 +195,7 @@ export function applyMinimaxApiProviderConfig(
const normalizedApiKey = resolvedApiKey?.trim() === "minimax" ? "" : resolvedApiKey;
providers.minimax = {
...existingProviderRest,
baseUrl: MINIMAX_API_BASE_URL,
baseUrl,
api: "anthropic-messages",
...(normalizedApiKey?.trim() ? { apiKey: normalizedApiKey } : {}),
models: mergedModels.length > 0 ? mergedModels : [apiModel],
@@ -189,11 +220,12 @@ export function applyMinimaxApiProviderConfig(
};
}
export function applyMinimaxApiConfig(
function applyMinimaxApiConfigWithBaseUrl(
cfg: OpenClawConfig,
modelId: string = "MiniMax-M2.1",
modelId: string,
baseUrl: string,
): OpenClawConfig {
const next = applyMinimaxApiProviderConfig(cfg, modelId);
const next = applyMinimaxApiProviderConfigWithBaseUrl(cfg, modelId, baseUrl);
return {
...next,
agents: {

View File

@@ -3,6 +3,7 @@ import { QIANFAN_BASE_URL, QIANFAN_DEFAULT_MODEL_ID } from "../agents/models-con
export const DEFAULT_MINIMAX_BASE_URL = "https://api.minimax.io/v1";
export const MINIMAX_API_BASE_URL = "https://api.minimax.io/anthropic";
export const MINIMAX_CN_API_BASE_URL = "https://api.minimaxi.com/anthropic";
export const MINIMAX_HOSTED_MODEL_ID = "MiniMax-M2.1";
export const MINIMAX_HOSTED_MODEL_REF = `minimax/${MINIMAX_HOSTED_MODEL_ID}`;
export const DEFAULT_MINIMAX_CONTEXT_WINDOW = 200000;
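These constants feed the base-URL-parameterized helpers introduced in the previous file: one private `...WithBaseUrl` function does the work, and thin public wrappers pin the regional endpoint. A minimal sketch of that wrapper shape, using the URLs from this diff but a deliberately simplified config type:

```typescript
// Sketch of the refactor: parameterize by baseUrl once, expose per-region
// wrappers. The Cfg shape is illustrative, not the real OpenClawConfig.
type Cfg = { baseUrl?: string; modelId?: string };

const GLOBAL_URL = "https://api.minimax.io/anthropic";
const CN_URL = "https://api.minimaxi.com/anthropic";

function applyWithBaseUrl(cfg: Cfg, modelId: string, baseUrl: string): Cfg {
  return { ...cfg, modelId, baseUrl };
}

// Public wrappers keep the old call sites working while pinning the endpoint.
function applyGlobal(cfg: Cfg, modelId = "MiniMax-M2.5"): Cfg {
  return applyWithBaseUrl(cfg, modelId, GLOBAL_URL);
}

function applyCn(cfg: Cfg, modelId = "MiniMax-M2.5"): Cfg {
  return applyWithBaseUrl(cfg, modelId, CN_URL);
}
```

Adding another region later means one new constant and one new two-line wrapper, with no duplication of the merge logic.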

View File

@@ -38,7 +38,9 @@ export {
} from "./onboard-auth.config-core.js";
export {
applyMinimaxApiConfig,
applyMinimaxApiConfigCn,
applyMinimaxApiProviderConfig,
applyMinimaxApiProviderConfigCn,
applyMinimaxConfig,
applyMinimaxHostedConfig,
applyMinimaxHostedProviderConfig,
@@ -92,6 +94,7 @@ export {
KIMI_CODING_MODEL_ID,
KIMI_CODING_MODEL_REF,
MINIMAX_API_BASE_URL,
MINIMAX_CN_API_BASE_URL,
MINIMAX_HOSTED_MODEL_ID,
MINIMAX_HOSTED_MODEL_REF,
MOONSHOT_BASE_URL,

View File

@@ -15,7 +15,7 @@ export async function setupInternalHooks(
"Hooks let you automate actions when agent commands are issued.",
"Example: Save session context to memory when you issue /new.",
"",
"Learn more: https://docs.openclaw.ai/hooks",
"Learn more: https://docs.openclaw.ai/automation/hooks",
].join("\n"),
"Hooks",
);

View File

@@ -155,6 +155,72 @@ async function expectApiKeyProfile(params: {
}
describe("onboard (non-interactive): provider auth", () => {
it("stores MiniMax API key and uses global baseUrl by default", async () => {
await withOnboardEnv("openclaw-onboard-minimax-", async ({ configPath, runtime }) => {
await runNonInteractive(
{
nonInteractive: true,
authChoice: "minimax-api",
minimaxApiKey: "sk-minimax-test",
skipHealth: true,
skipChannels: true,
skipSkills: true,
json: true,
},
runtime,
);
const cfg = await readJsonFile<{
auth?: { profiles?: Record<string, { provider?: string; mode?: string }> };
agents?: { defaults?: { model?: { primary?: string } } };
models?: { providers?: Record<string, { baseUrl?: string }> };
}>(configPath);
expect(cfg.auth?.profiles?.["minimax:default"]?.provider).toBe("minimax");
expect(cfg.auth?.profiles?.["minimax:default"]?.mode).toBe("api_key");
expect(cfg.models?.providers?.minimax?.baseUrl).toBe("https://api.minimax.io/anthropic");
expect(cfg.agents?.defaults?.model?.primary).toBe("minimax/MiniMax-M2.5");
await expectApiKeyProfile({
profileId: "minimax:default",
provider: "minimax",
key: "sk-minimax-test",
});
});
}, 60_000);
it("supports MiniMax CN API endpoint auth choice", async () => {
await withOnboardEnv("openclaw-onboard-minimax-cn-", async ({ configPath, runtime }) => {
await runNonInteractive(
{
nonInteractive: true,
authChoice: "minimax-api-key-cn",
minimaxApiKey: "sk-minimax-test",
skipHealth: true,
skipChannels: true,
skipSkills: true,
json: true,
},
runtime,
);
const cfg = await readJsonFile<{
auth?: { profiles?: Record<string, { provider?: string; mode?: string }> };
agents?: { defaults?: { model?: { primary?: string } } };
models?: { providers?: Record<string, { baseUrl?: string }> };
}>(configPath);
expect(cfg.auth?.profiles?.["minimax:default"]?.provider).toBe("minimax");
expect(cfg.auth?.profiles?.["minimax:default"]?.mode).toBe("api_key");
expect(cfg.models?.providers?.minimax?.baseUrl).toBe("https://api.minimaxi.com/anthropic");
expect(cfg.agents?.defaults?.model?.primary).toBe("minimax/MiniMax-M2.5");
await expectApiKeyProfile({
profileId: "minimax:default",
provider: "minimax",
key: "sk-minimax-test",
});
});
}, 60_000);
it("stores Z.AI API key and uses global baseUrl by default", async () => {
await withOnboardEnv("openclaw-onboard-zai-", async ({ configPath, runtime }) => {
await runNonInteractive(

View File

@@ -15,6 +15,7 @@ import {
applyQianfanConfig,
applyKimiCodeConfig,
applyMinimaxApiConfig,
applyMinimaxApiConfigCn,
applyMinimaxConfig,
applyMoonshotConfig,
applyMoonshotConfigCn,
@@ -570,6 +571,7 @@ export async function applyNonInteractiveAuthChoice(params: {
if (
authChoice === "minimax-cloud" ||
authChoice === "minimax-api" ||
authChoice === "minimax-api-key-cn" ||
authChoice === "minimax-api-lightning"
) {
const resolved = await resolveNonInteractiveApiKey({
@@ -592,8 +594,10 @@ export async function applyNonInteractiveAuthChoice(params: {
mode: "api_key",
});
const modelId =
authChoice === "minimax-api-lightning" ? "MiniMax-M2.1-lightning" : "MiniMax-M2.1";
return applyMinimaxApiConfig(nextConfig, modelId);
authChoice === "minimax-api-lightning" ? "MiniMax-M2.5-Lightning" : "MiniMax-M2.5";
return authChoice === "minimax-api-key-cn"
? applyMinimaxApiConfigCn(nextConfig, modelId)
: applyMinimaxApiConfig(nextConfig, modelId);
}
if (authChoice === "minimax") {

View File

@@ -37,6 +37,7 @@ export type AuthChoice =
| "minimax-cloud"
| "minimax"
| "minimax-api"
| "minimax-api-key-cn"
| "minimax-api-lightning"
| "minimax-portal"
| "opencode-zen"

View File

@@ -20,6 +20,10 @@ export type ExecApprovalRecord = {
request: ExecApprovalRequestPayload;
createdAtMs: number;
expiresAtMs: number;
// Caller metadata (best-effort). Used to prevent other clients from replaying an approval id.
requestedByConnId?: string | null;
requestedByDeviceId?: string | null;
requestedByClientId?: string | null;
resolvedAtMs?: number;
decision?: ExecApprovalDecision;
resolvedBy?: string | null;

View File

@@ -3,7 +3,8 @@ import { afterEach, beforeEach, describe, expect, test } from "vitest";
import type { ChannelPlugin } from "../channels/plugins/types.js";
import type { OpenClawConfig } from "../config/config.js";
import { setActivePluginRegistry } from "../plugins/runtime.js";
import { createIMessageTestPlugin, createTestRegistry } from "../test-utils/channel-plugins.js";
import { createTestRegistry } from "../test-utils/channel-plugins.js";
import { createIMessageTestPlugin } from "../test-utils/imessage-test-plugin.js";
import {
extractHookToken,
isHookAgentAllowed,

View File

@@ -0,0 +1,21 @@
import type { ExecApprovalManager } from "./exec-approval-manager.js";
import type { GatewayClient } from "./server-methods/types.js";
import { sanitizeSystemRunParamsForForwarding } from "./node-invoke-system-run-approval.js";
export function sanitizeNodeInvokeParamsForForwarding(opts: {
command: string;
rawParams: unknown;
client: GatewayClient | null;
execApprovalManager?: ExecApprovalManager;
}):
| { ok: true; params: unknown }
| { ok: false; message: string; details?: Record<string, unknown> } {
if (opts.command === "system.run") {
return sanitizeSystemRunParamsForForwarding({
rawParams: opts.rawParams,
client: opts.client,
execApprovalManager: opts.execApprovalManager,
});
}
return { ok: true, params: opts.rawParams };
}

View File

@@ -0,0 +1,237 @@
import type { ExecApprovalManager, ExecApprovalRecord } from "./exec-approval-manager.js";
import type { GatewayClient } from "./server-methods/types.js";
type SystemRunParamsLike = {
command?: unknown;
rawCommand?: unknown;
cwd?: unknown;
env?: unknown;
timeoutMs?: unknown;
needsScreenRecording?: unknown;
agentId?: unknown;
sessionKey?: unknown;
approved?: unknown;
approvalDecision?: unknown;
runId?: unknown;
};
function asRecord(value: unknown): Record<string, unknown> | null {
if (!value || typeof value !== "object" || Array.isArray(value)) {
return null;
}
return value as Record<string, unknown>;
}
function normalizeString(value: unknown): string | null {
if (typeof value !== "string") {
return null;
}
const trimmed = value.trim();
return trimmed ? trimmed : null;
}
function normalizeApprovalDecision(value: unknown): "allow-once" | "allow-always" | null {
const s = normalizeString(value);
return s === "allow-once" || s === "allow-always" ? s : null;
}
function clientHasApprovals(client: GatewayClient | null): boolean {
const scopes = Array.isArray(client?.connect?.scopes) ? client?.connect?.scopes : [];
return scopes.includes("operator.admin") || scopes.includes("operator.approvals");
}
function getCmdText(params: SystemRunParamsLike): string {
const raw = normalizeString(params.rawCommand);
if (raw) {
return raw;
}
if (Array.isArray(params.command)) {
const parts = params.command.map((v) => String(v));
if (parts.length > 0) {
return parts.join(" ");
}
}
return "";
}
function approvalMatchesRequest(params: SystemRunParamsLike, record: ExecApprovalRecord): boolean {
if (record.request.host !== "node") {
return false;
}
const cmdText = getCmdText(params);
if (!cmdText || record.request.command !== cmdText) {
return false;
}
const reqCwd = record.request.cwd ?? null;
const runCwd = normalizeString(params.cwd) ?? null;
if (reqCwd !== runCwd) {
return false;
}
const reqAgentId = record.request.agentId ?? null;
const runAgentId = normalizeString(params.agentId) ?? null;
if (reqAgentId !== runAgentId) {
return false;
}
const reqSessionKey = record.request.sessionKey ?? null;
const runSessionKey = normalizeString(params.sessionKey) ?? null;
if (reqSessionKey !== runSessionKey) {
return false;
}
return true;
}
function pickSystemRunParams(raw: Record<string, unknown>): Record<string, unknown> {
// Defensive allowlist: only forward fields that the node-host `system.run` handler understands.
// This prevents future internal control fields from being smuggled through the gateway.
const next: Record<string, unknown> = {};
for (const key of [
"command",
"rawCommand",
"cwd",
"env",
"timeoutMs",
"needsScreenRecording",
"agentId",
"sessionKey",
"runId",
]) {
if (key in raw) {
next[key] = raw[key];
}
}
return next;
}
/**
* Gate `system.run` approval flags (`approved`, `approvalDecision`) behind a real
* `exec.approval.*` record. This prevents users with only `operator.write` from
* bypassing node-host approvals by injecting control fields into `node.invoke`.
*/
export function sanitizeSystemRunParamsForForwarding(opts: {
rawParams: unknown;
client: GatewayClient | null;
execApprovalManager?: ExecApprovalManager;
nowMs?: number;
}):
| { ok: true; params: unknown }
| { ok: false; message: string; details?: Record<string, unknown> } {
const obj = asRecord(opts.rawParams);
if (!obj) {
return { ok: true, params: opts.rawParams };
}
const p = obj as SystemRunParamsLike;
const approved = p.approved === true;
const requestedDecision = normalizeApprovalDecision(p.approvalDecision);
const wantsApprovalOverride = approved || requestedDecision !== null;
// Always strip control fields from user input. If the override is allowed,
// we re-add trusted fields based on the gateway approval record.
const next: Record<string, unknown> = pickSystemRunParams(obj);
if (!wantsApprovalOverride) {
return { ok: true, params: next };
}
const runId = normalizeString(p.runId);
if (!runId) {
return {
ok: false,
message: "approval override requires params.runId",
details: { code: "MISSING_RUN_ID" },
};
}
const manager = opts.execApprovalManager;
if (!manager) {
return {
ok: false,
message: "exec approvals unavailable",
details: { code: "APPROVALS_UNAVAILABLE" },
};
}
const snapshot = manager.getSnapshot(runId);
if (!snapshot) {
return {
ok: false,
message: "unknown or expired approval id",
details: { code: "UNKNOWN_APPROVAL_ID", runId },
};
}
const nowMs = typeof opts.nowMs === "number" ? opts.nowMs : Date.now();
if (nowMs > snapshot.expiresAtMs) {
return {
ok: false,
message: "approval expired",
details: { code: "APPROVAL_EXPIRED", runId },
};
}
// Prefer binding by device identity (stable across reconnects / per-call clients like callGateway()).
// Fallback to connId only when device identity is not available.
const snapshotDeviceId = snapshot.requestedByDeviceId ?? null;
const clientDeviceId = opts.client?.connect?.device?.id ?? null;
if (snapshotDeviceId) {
if (snapshotDeviceId !== clientDeviceId) {
return {
ok: false,
message: "approval id not valid for this device",
details: { code: "APPROVAL_DEVICE_MISMATCH", runId },
};
}
} else if (
snapshot.requestedByConnId &&
snapshot.requestedByConnId !== (opts.client?.connId ?? null)
) {
return {
ok: false,
message: "approval id not valid for this client",
details: { code: "APPROVAL_CLIENT_MISMATCH", runId },
};
}
if (!approvalMatchesRequest(p, snapshot)) {
return {
ok: false,
message: "approval id does not match request",
details: { code: "APPROVAL_REQUEST_MISMATCH", runId },
};
}
// Normal path: enforce the decision recorded by the gateway.
if (snapshot.decision === "allow-once" || snapshot.decision === "allow-always") {
next.approved = true;
next.approvalDecision = snapshot.decision;
return { ok: true, params: next };
}
// If the approval request timed out (no decision recorded), allow askFallback-driven
// "allow-once" ONLY for clients that are allowed to use exec approvals.
const timedOut =
snapshot.resolvedAtMs !== undefined &&
snapshot.decision === undefined &&
snapshot.resolvedBy === null;
if (
timedOut &&
approved &&
requestedDecision === "allow-once" &&
clientHasApprovals(opts.client)
) {
next.approved = true;
next.approvalDecision = "allow-once";
return { ok: true, params: next };
}
return {
ok: false,
message: "approval required",
details: { code: "APPROVAL_REQUIRED", runId },
};
}
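The device-first binding rule in the sanitizer above can be condensed into a standalone predicate. This is a hedged sketch under assumed shapes — `ApprovalSnapshot`, `ClientIdentity`, and `approvalBoundToClient` are illustrative names, not the repo's API:

```typescript
// Hypothetical sketch (not the repo's API): bind an approval to the requesting
// client, preferring stable device identity over the ephemeral connection id.
type ApprovalSnapshot = {
  requestedByDeviceId: string | null;
  requestedByConnId: string | null;
};
type ClientIdentity = { deviceId: string | null; connId: string | null };

function approvalBoundToClient(s: ApprovalSnapshot, c: ClientIdentity): boolean {
  if (s.requestedByDeviceId) {
    // Device identity survives reconnects, so it wins whenever it was recorded.
    return s.requestedByDeviceId === c.deviceId;
  }
  if (s.requestedByConnId) {
    // Fallback: only the exact same connection may redeem the approval.
    return s.requestedByConnId === c.connId;
  }
  return true; // No binding recorded; nothing to enforce here.
}
```

Preferring device identity keeps approvals redeemable by per-call clients that reconnect with a fresh connId, while still refusing redemption from a different device.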


@@ -1,6 +1,6 @@
import type { GatewayWsClient } from "./server/ws-types.js";
import { MAX_BUFFERED_BYTES } from "./server-constants.js";
import { logWs, summarizeAgentEventForWsLog } from "./ws-log.js";
import { logWs, shouldLogWs, summarizeAgentEventForWsLog } from "./ws-log.js";
const ADMIN_SCOPE = "operator.admin";
const APPROVALS_SCOPE = "operator.approvals";
@@ -43,6 +43,9 @@ export function createGatewayBroadcaster(params: { clients: Set<GatewayWsClient>
},
targetConnIds?: ReadonlySet<string>,
) => {
if (params.clients.size === 0) {
return;
}
const isTargeted = Boolean(targetConnIds);
const eventSeq = isTargeted ? undefined : ++seq;
const frame = JSON.stringify({
@@ -52,19 +55,21 @@ export function createGatewayBroadcaster(params: { clients: Set<GatewayWsClient>
seq: eventSeq,
stateVersion: opts?.stateVersion,
});
const logMeta: Record<string, unknown> = {
event,
seq: eventSeq ?? "targeted",
clients: params.clients.size,
targets: targetConnIds ? targetConnIds.size : undefined,
dropIfSlow: opts?.dropIfSlow,
presenceVersion: opts?.stateVersion?.presence,
healthVersion: opts?.stateVersion?.health,
};
if (event === "agent") {
Object.assign(logMeta, summarizeAgentEventForWsLog(payload));
if (shouldLogWs()) {
const logMeta: Record<string, unknown> = {
event,
seq: eventSeq ?? "targeted",
clients: params.clients.size,
targets: targetConnIds ? targetConnIds.size : undefined,
dropIfSlow: opts?.dropIfSlow,
presenceVersion: opts?.stateVersion?.presence,
healthVersion: opts?.stateVersion?.health,
};
if (event === "agent") {
Object.assign(logMeta, summarizeAgentEventForWsLog(payload));
}
logWs("out", "event", logMeta);
}
logWs("out", "event", logMeta);
for (const c of params.clients) {
if (targetConnIds && !targetConnIds.has(c.connId)) {
continue;


@@ -15,7 +15,7 @@ export function createExecApprovalHandlers(
opts?: { forwarder?: ExecApprovalForwarder },
): GatewayRequestHandlers {
return {
"exec.approval.request": async ({ params, respond, context }) => {
"exec.approval.request": async ({ params, respond, context, client }) => {
if (!validateExecApprovalRequestParams(params)) {
respond(
false,
@@ -64,6 +64,9 @@ export function createExecApprovalHandlers(
sessionKey: p.sessionKey ?? null,
};
const record = manager.create(request, timeoutMs, explicitId);
record.requestedByConnId = client?.connId ?? null;
record.requestedByDeviceId = client?.connect?.device?.id ?? null;
record.requestedByClientId = client?.connect?.client?.id ?? null;
// Use register() to synchronously add to pending map before sending any response.
// This ensures the approval ID is valid immediately after the "accepted" response.
let decisionPromise: Promise<


@@ -10,6 +10,7 @@ import {
verifyNodeToken,
} from "../../infra/node-pairing.js";
import { isNodeCommandAllowed, resolveNodeCommandAllowlist } from "../node-command-policy.js";
import { sanitizeNodeInvokeParamsForForwarding } from "../node-invoke-sanitize.js";
import {
ErrorCodes,
errorShape,
@@ -361,7 +362,7 @@ export const nodeHandlers: GatewayRequestHandlers = {
);
});
},
"node.invoke": async ({ params, respond, context }) => {
"node.invoke": async ({ params, respond, context, client }) => {
if (!validateNodeInvokeParams(params)) {
respondInvalidParams({
respond,
@@ -417,10 +418,26 @@ export const nodeHandlers: GatewayRequestHandlers = {
);
return;
}
const forwardedParams = sanitizeNodeInvokeParamsForForwarding({
command,
rawParams: p.params,
client,
execApprovalManager: context.execApprovalManager,
});
if (!forwardedParams.ok) {
respond(
false,
undefined,
errorShape(ErrorCodes.INVALID_REQUEST, forwardedParams.message, {
details: forwardedParams.details ?? null,
}),
);
return;
}
const res = await context.nodeRegistry.invoke({
nodeId,
command,
params: p.params,
params: forwardedParams.params,
timeoutMs: p.timeoutMs,
idempotencyKey: p.idempotencyKey,
});


@@ -5,6 +5,7 @@ import type { CronService } from "../../cron/service.js";
import type { createSubsystemLogger } from "../../logging/subsystem.js";
import type { WizardSession } from "../../wizard/session.js";
import type { ChatAbortControllerEntry } from "../chat-abort.js";
import type { ExecApprovalManager } from "../exec-approval-manager.js";
import type { NodeRegistry } from "../node-registry.js";
import type { ConnectParams, ErrorShape, RequestFrame } from "../protocol/index.js";
import type { ChannelRuntimeSnapshot } from "../server-channels.js";
@@ -28,6 +29,7 @@ export type GatewayRequestContext = {
deps: ReturnType<typeof createDefaultDeps>;
cron: CronService;
cronStorePath: string;
execApprovalManager?: ExecApprovalManager;
loadGatewayModelCatalog: () => Promise<ModelCatalogEntry[]>;
getHealthCache: () => HealthSummary | null;
refreshHealthSnapshot: (opts?: { probe?: boolean }) => Promise<HealthSummary>;


@@ -565,6 +565,7 @@ export async function startGatewayServer(
deps,
cron,
cronStorePath,
execApprovalManager,
loadGatewayModelCatalog,
getHealthCache,
refreshHealthSnapshot: refreshGatewayHealthSnapshot,


@@ -0,0 +1,281 @@
import crypto from "node:crypto";
import { afterAll, beforeAll, describe, expect, test } from "vitest";
import { WebSocket } from "ws";
import {
deriveDeviceIdFromPublicKey,
publicKeyRawBase64UrlFromPem,
signDevicePayload,
} from "../infra/device-identity.js";
import { sleep } from "../utils.js";
import { GATEWAY_CLIENT_MODES, GATEWAY_CLIENT_NAMES } from "../utils/message-channel.js";
import { GatewayClient } from "./client.js";
import { buildDeviceAuthPayload } from "./device-auth.js";
import {
connectReq,
installGatewayTestHooks,
rpcReq,
startServerWithClient,
} from "./test-helpers.js";
installGatewayTestHooks({ scope: "suite" });
describe("node.invoke approval bypass", () => {
let server: Awaited<ReturnType<typeof startServerWithClient>>["server"];
let port: number;
beforeAll(async () => {
const started = await startServerWithClient("secret", { controlUiEnabled: true });
server = started.server;
port = started.port;
});
afterAll(async () => {
await server.close();
});
const connectOperator = async (scopes: string[]) => {
const ws = new WebSocket(`ws://127.0.0.1:${port}`);
await new Promise<void>((resolve) => ws.once("open", resolve));
const res = await connectReq(ws, { token: "secret", scopes });
expect(res.ok).toBe(true);
return ws;
};
const connectOperatorWithNewDevice = async (scopes: string[]) => {
const { publicKey, privateKey } = crypto.generateKeyPairSync("ed25519");
const publicKeyPem = publicKey.export({ type: "spki", format: "pem" }).toString();
const privateKeyPem = privateKey.export({ type: "pkcs8", format: "pem" }).toString();
const publicKeyRaw = publicKeyRawBase64UrlFromPem(publicKeyPem);
const deviceId = deriveDeviceIdFromPublicKey(publicKeyRaw);
expect(deviceId).toBeTruthy();
const signedAtMs = Date.now();
const payload = buildDeviceAuthPayload({
deviceId: deviceId!,
clientId: GATEWAY_CLIENT_NAMES.TEST,
clientMode: GATEWAY_CLIENT_MODES.TEST,
role: "operator",
scopes,
signedAtMs,
token: "secret",
});
const ws = new WebSocket(`ws://127.0.0.1:${port}`);
await new Promise<void>((resolve) => ws.once("open", resolve));
const res = await connectReq(ws, {
token: "secret",
scopes,
device: {
id: deviceId!,
publicKey: publicKeyRaw,
signature: signDevicePayload(privateKeyPem, payload),
signedAt: signedAtMs,
},
});
expect(res.ok).toBe(true);
return ws;
};
const connectLinuxNode = async (onInvoke: (payload: unknown) => void) => {
let readyResolve: (() => void) | null = null;
const ready = new Promise<void>((resolve) => {
readyResolve = resolve;
});
const client = new GatewayClient({
url: `ws://127.0.0.1:${port}`,
connectDelayMs: 0,
token: "secret",
role: "node",
clientName: GATEWAY_CLIENT_NAMES.NODE_HOST,
clientVersion: "1.0.0",
platform: "linux",
mode: GATEWAY_CLIENT_MODES.NODE,
scopes: [],
commands: ["system.run"],
onHelloOk: () => readyResolve?.(),
onEvent: (evt) => {
if (evt.event !== "node.invoke.request") {
return;
}
onInvoke(evt.payload);
const payload = evt.payload as {
id?: string;
nodeId?: string;
};
const id = typeof payload?.id === "string" ? payload.id : "";
const nodeId = typeof payload?.nodeId === "string" ? payload.nodeId : "";
if (!id || !nodeId) {
return;
}
void client.request("node.invoke.result", {
id,
nodeId,
ok: true,
payloadJSON: JSON.stringify({ ok: true }),
});
},
});
client.start();
await Promise.race([
ready,
sleep(10_000).then(() => {
throw new Error("timeout waiting for node to connect");
}),
]);
return client;
};
test("rejects injecting approved/approvalDecision without approval id", async () => {
let sawInvoke = false;
const node = await connectLinuxNode(() => {
sawInvoke = true;
});
const ws = await connectOperator(["operator.write"]);
const nodes = await rpcReq<{ nodes?: Array<{ nodeId: string; connected?: boolean }> }>(
ws,
"node.list",
{},
);
expect(nodes.ok).toBe(true);
const nodeId = nodes.payload?.nodes?.find((n) => n.connected)?.nodeId ?? "";
expect(nodeId).toBeTruthy();
const res = await rpcReq(ws, "node.invoke", {
nodeId,
command: "system.run",
params: {
command: ["echo", "hi"],
rawCommand: "echo hi",
approved: true,
approvalDecision: "allow-once",
},
idempotencyKey: crypto.randomUUID(),
});
expect(res.ok).toBe(false);
expect(res.error?.message ?? "").toContain("params.runId");
// Ensure the node didn't receive the invoke (gateway should fail early).
await sleep(50);
expect(sawInvoke).toBe(false);
ws.close();
node.stop();
});
test("binds system.run approval flags to exec.approval decision (ignores caller escalation)", async () => {
let lastInvokeParams: Record<string, unknown> | null = null;
const node = await connectLinuxNode((payload) => {
const obj = payload as { paramsJSON?: unknown };
const raw = typeof obj?.paramsJSON === "string" ? obj.paramsJSON : "";
if (!raw) {
lastInvokeParams = null;
return;
}
lastInvokeParams = JSON.parse(raw) as Record<string, unknown>;
});
const ws = await connectOperator(["operator.write", "operator.approvals"]);
const ws2 = await connectOperator(["operator.write"]);
const nodes = await rpcReq<{ nodes?: Array<{ nodeId: string; connected?: boolean }> }>(
ws,
"node.list",
{},
);
expect(nodes.ok).toBe(true);
const nodeId = nodes.payload?.nodes?.find((n) => n.connected)?.nodeId ?? "";
expect(nodeId).toBeTruthy();
const approvalId = crypto.randomUUID();
const requestP = rpcReq(ws, "exec.approval.request", {
id: approvalId,
command: "echo hi",
cwd: null,
host: "node",
timeoutMs: 30_000,
});
await rpcReq(ws, "exec.approval.resolve", { id: approvalId, decision: "allow-once" });
const requested = await requestP;
expect(requested.ok).toBe(true);
// Use a second WebSocket connection to simulate per-call clients (callGatewayTool/callGatewayCli).
// Approval binding should be based on device identity, not the ephemeral connId.
const invoke = await rpcReq(ws2, "node.invoke", {
nodeId,
command: "system.run",
params: {
command: ["echo", "hi"],
rawCommand: "echo hi",
runId: approvalId,
approved: true,
// Try to escalate to allow-always; gateway should clamp to allow-once from record.
approvalDecision: "allow-always",
injected: "nope",
},
idempotencyKey: crypto.randomUUID(),
});
expect(invoke.ok).toBe(true);
expect(lastInvokeParams).toBeTruthy();
expect(lastInvokeParams?.approved).toBe(true);
expect(lastInvokeParams?.approvalDecision).toBe("allow-once");
expect(lastInvokeParams?.injected).toBeUndefined();
ws.close();
ws2.close();
node.stop();
});
test("rejects replaying approval id from another device", async () => {
let sawInvoke = false;
const node = await connectLinuxNode(() => {
sawInvoke = true;
});
const ws = await connectOperator(["operator.write", "operator.approvals"]);
const wsOtherDevice = await connectOperatorWithNewDevice(["operator.write"]);
const nodes = await rpcReq<{ nodes?: Array<{ nodeId: string; connected?: boolean }> }>(
ws,
"node.list",
{},
);
expect(nodes.ok).toBe(true);
const nodeId = nodes.payload?.nodes?.find((n) => n.connected)?.nodeId ?? "";
expect(nodeId).toBeTruthy();
const approvalId = crypto.randomUUID();
const requestP = rpcReq(ws, "exec.approval.request", {
id: approvalId,
command: "echo hi",
cwd: null,
host: "node",
timeoutMs: 30_000,
});
await rpcReq(ws, "exec.approval.resolve", { id: approvalId, decision: "allow-once" });
const requested = await requestP;
expect(requested.ok).toBe(true);
const invoke = await rpcReq(wsOtherDevice, "node.invoke", {
nodeId,
command: "system.run",
params: {
command: ["echo", "hi"],
rawCommand: "echo hi",
runId: approvalId,
approved: true,
approvalDecision: "allow-once",
},
idempotencyKey: crypto.randomUUID(),
});
expect(invoke.ok).toBe(false);
expect(invoke.error?.message ?? "").toContain("not valid for this device");
await sleep(50);
expect(sawInvoke).toBe(false);
ws.close();
wsOtherDevice.close();
node.stop();
});
});


@@ -175,6 +175,110 @@ type TranscriptMessage = {
provenance?: unknown;
};
export function readSessionTitleFieldsFromTranscript(
sessionId: string,
storePath: string | undefined,
sessionFile?: string,
agentId?: string,
opts?: { includeInterSession?: boolean },
): { firstUserMessage: string | null; lastMessagePreview: string | null } {
const candidates = resolveSessionTranscriptCandidates(sessionId, storePath, sessionFile, agentId);
const filePath = candidates.find((p) => fs.existsSync(p));
if (!filePath) {
return { firstUserMessage: null, lastMessagePreview: null };
}
let fd: number | null = null;
try {
fd = fs.openSync(filePath, "r");
const stat = fs.fstatSync(fd);
const size = stat.size;
if (size === 0) {
return { firstUserMessage: null, lastMessagePreview: null };
}
// Head (first user message)
let firstUserMessage: string | null = null;
try {
const buf = Buffer.alloc(8192);
const bytesRead = fs.readSync(fd, buf, 0, buf.length, 0);
if (bytesRead > 0) {
const chunk = buf.toString("utf-8", 0, bytesRead);
const lines = chunk.split(/\r?\n/).slice(0, MAX_LINES_TO_SCAN);
for (const line of lines) {
if (!line.trim()) {
continue;
}
try {
const parsed = JSON.parse(line);
const msg = parsed?.message as TranscriptMessage | undefined;
if (msg?.role !== "user") {
continue;
}
if (opts?.includeInterSession !== true && hasInterSessionUserProvenance(msg)) {
continue;
}
const text = extractTextFromContent(msg.content);
if (text) {
firstUserMessage = text;
break;
}
} catch {
// skip malformed lines
}
}
}
} catch {
// ignore head read errors
}
// Tail (last message preview)
let lastMessagePreview: string | null = null;
try {
const readStart = Math.max(0, size - LAST_MSG_MAX_BYTES);
const readLen = Math.min(size, LAST_MSG_MAX_BYTES);
const buf = Buffer.alloc(readLen);
fs.readSync(fd, buf, 0, readLen, readStart);
const chunk = buf.toString("utf-8");
const lines = chunk.split(/\r?\n/).filter((l) => l.trim());
const tailLines = lines.slice(-LAST_MSG_MAX_LINES);
for (let i = tailLines.length - 1; i >= 0; i--) {
const line = tailLines[i];
try {
const parsed = JSON.parse(line);
const msg = parsed?.message as TranscriptMessage | undefined;
if (msg?.role !== "user" && msg?.role !== "assistant") {
continue;
}
const text = extractTextFromContent(msg.content);
if (text) {
lastMessagePreview = text;
break;
}
} catch {
// skip malformed
}
}
} catch {
// ignore tail read errors
}
return { firstUserMessage, lastMessagePreview };
} catch {
return { firstUserMessage: null, lastMessagePreview: null };
} finally {
if (fd !== null) {
try {
fs.closeSync(fd);
} catch {
/* ignore */
}
}
}
}
function extractTextFromContent(content: TranscriptMessage["content"]): string | null {
if (typeof content === "string") {
return content.trim() || null;


@@ -33,10 +33,7 @@ import {
} from "../routing/session-key.js";
import { isCronRunSessionKey } from "../sessions/session-key-utils.js";
import { normalizeSessionDeliveryFields } from "../utils/delivery-context.js";
import {
readFirstUserMessageFromTranscript,
readLastMessagePreviewFromTranscript,
} from "./session-utils.fs.js";
import { readSessionTitleFieldsFromTranscript } from "./session-utils.fs.js";
export {
archiveFileOnDisk,
@@ -44,6 +41,7 @@ export {
capArrayByJsonBytes,
readFirstUserMessageFromTranscript,
readLastMessagePreviewFromTranscript,
readSessionTitleFieldsFromTranscript,
readSessionPreviewItemsFromTranscript,
readSessionMessages,
resolveSessionTranscriptCandidates,
@@ -817,22 +815,21 @@ export function listSessionsFromStore(params: {
let derivedTitle: string | undefined;
let lastMessagePreview: string | undefined;
if (entry?.sessionId) {
if (includeDerivedTitles) {
const firstUserMsg = readFirstUserMessageFromTranscript(
if (includeDerivedTitles || includeLastMessage) {
const parsed = parseAgentSessionKey(s.key);
const agentId =
parsed && parsed.agentId ? normalizeAgentId(parsed.agentId) : resolveDefaultAgentId(cfg);
const fields = readSessionTitleFieldsFromTranscript(
entry.sessionId,
storePath,
entry.sessionFile,
agentId,
);
derivedTitle = deriveSessionTitle(entry, firstUserMsg);
}
if (includeLastMessage) {
const lastMsg = readLastMessagePreviewFromTranscript(
entry.sessionId,
storePath,
entry.sessionFile,
);
if (lastMsg) {
lastMessagePreview = lastMsg;
if (includeDerivedTitles) {
derivedTitle = deriveSessionTitle(entry, fields.firstUserMessage);
}
if (includeLastMessage && fields.lastMessagePreview) {
lastMessagePreview = fields.lastMessagePreview;
}
}
}


@@ -279,6 +279,68 @@ describe("POST /tools/invoke", () => {
expect(res.status).toBe(404);
});
it("allows gateway tool via HTTP when explicitly enabled in gateway.tools.allow", async () => {
testState.agentsConfig = {
list: [{ id: "main", tools: { allow: ["gateway"] } }],
// oxlint-disable-next-line typescript/no-explicit-any
} as any;
const { writeConfigFile } = await import("../config/config.js");
await writeConfigFile({
gateway: { tools: { allow: ["gateway"] } },
// oxlint-disable-next-line typescript/no-explicit-any
} as any);
const token = resolveGatewayToken();
try {
const res = await invokeTool({
port: sharedPort,
tool: "gateway",
headers: { authorization: `Bearer ${token}` },
sessionKey: "main",
});
// Ensure we didn't hit the HTTP deny list (404). Invalid args should map to 400.
expect(res.status).toBe(400);
const body = await res.json();
expect(body.ok).toBe(false);
expect(body.error?.type).toBe("tool_error");
} finally {
await writeConfigFile({
// oxlint-disable-next-line typescript/no-explicit-any
} as any);
}
});
it("treats gateway.tools.deny as higher priority than gateway.tools.allow", async () => {
testState.agentsConfig = {
list: [{ id: "main", tools: { allow: ["gateway"] } }],
// oxlint-disable-next-line typescript/no-explicit-any
} as any;
const { writeConfigFile } = await import("../config/config.js");
await writeConfigFile({
gateway: { tools: { allow: ["gateway"], deny: ["gateway"] } },
// oxlint-disable-next-line typescript/no-explicit-any
} as any);
const token = resolveGatewayToken();
try {
const res = await invokeTool({
port: sharedPort,
tool: "gateway",
headers: { authorization: `Bearer ${token}` },
sessionKey: "main",
});
expect(res.status).toBe(404);
} finally {
await writeConfigFile({
// oxlint-disable-next-line typescript/no-explicit-any
} as any);
}
});
it("uses the configured main session key when sessionKey is missing or main", async () => {
testState.agentsConfig = {
list: [


@@ -22,6 +22,7 @@ import { logWarn } from "../logger.js";
import { isTestDefaultMemorySlotDisabled } from "../plugins/config-state.js";
import { getPluginToolMeta } from "../plugins/tools.js";
import { isSubagentSessionKey } from "../routing/session-key.js";
import { DEFAULT_GATEWAY_HTTP_TOOL_DENY } from "../security/dangerous-tools.js";
import { normalizeMessageChannel } from "../utils/message-channel.js";
import { authorizeGatewayConnect, type ResolvedGatewayAuth } from "./auth.js";
import {
@@ -36,22 +37,6 @@ import { getBearerToken, getHeader } from "./http-utils.js";
const DEFAULT_BODY_BYTES = 2 * 1024 * 1024;
const MEMORY_TOOL_NAMES = new Set(["memory_search", "memory_get"]);
/**
* Tools denied via HTTP /tools/invoke regardless of session policy.
* Prevents RCE and privilege escalation from HTTP API surface.
* Configurable via gateway.tools.{deny,allow} in openclaw.json.
*/
const DEFAULT_GATEWAY_HTTP_TOOL_DENY = [
// Session orchestration — spawning agents remotely is RCE
"sessions_spawn",
// Cross-session injection — message injection across sessions
"sessions_send",
// Gateway control plane — prevents gateway reconfiguration via HTTP
"gateway",
// Interactive setup — requires terminal QR scan, hangs on HTTP
"whatsapp_login",
];
type ToolsInvokeBody = {
tool?: unknown;
action?: unknown;
@@ -345,9 +330,12 @@ export async function handleToolsInvokeHttpRequest(
// Gateway HTTP-specific deny list — applies to ALL sessions via HTTP.
const gatewayToolsCfg = cfg.gateway?.tools;
const gatewayDenyNames = DEFAULT_GATEWAY_HTTP_TOOL_DENY.filter(
const defaultGatewayDeny: string[] = DEFAULT_GATEWAY_HTTP_TOOL_DENY.filter(
(name) => !gatewayToolsCfg?.allow?.includes(name),
).concat(Array.isArray(gatewayToolsCfg?.deny) ? gatewayToolsCfg.deny : []);
);
const gatewayDenyNames = defaultGatewayDeny.concat(
Array.isArray(gatewayToolsCfg?.deny) ? gatewayToolsCfg.deny : [],
);
const gatewayDenySet = new Set(gatewayDenyNames);
const gatewayFiltered = subagentFiltered.filter((t) => !gatewayDenySet.has(t.name));
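The allow/deny interplay above reduces to two rules: `gateway.tools.allow` can only carve entries out of the built-in default deny list, and an explicit `gateway.tools.deny` always wins. A hedged sketch of that resolution (the constant list is copied from this diff; `effectiveDenySet` is an illustrative name):

```typescript
// Sketch of HTTP /tools/invoke deny resolution: explicit deny beats allow,
// and allow only removes names from the built-in default deny list.
const DEFAULT_GATEWAY_HTTP_TOOL_DENY = [
  "sessions_spawn",
  "sessions_send",
  "gateway",
  "whatsapp_login",
];

function effectiveDenySet(allow: string[] = [], deny: string[] = []): Set<string> {
  const base = DEFAULT_GATEWAY_HTTP_TOOL_DENY.filter((n) => !allow.includes(n));
  return new Set(base.concat(deny));
}
```

Concatenating the config deny list after the filtered defaults is what gives deny its higher priority: a name re-enabled via allow is re-denied if it also appears in deny.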


@@ -25,6 +25,10 @@ const wsInflightOptimized = new Map<string, number>();
const wsInflightSince = new Map<string, number>();
const wsLog = createSubsystemLogger("gateway/ws");
export function shouldLogWs(): boolean {
return shouldLogSubsystemToConsole("gateway/ws");
}
export function shortId(value: string): string {
const s = value.trim();
if (UUID_RE.test(s)) {


@@ -81,7 +81,7 @@ session-memory/
---
name: my-hook
description: "Short description"
homepage: https://docs.openclaw.ai/hooks#my-hook
homepage: https://docs.openclaw.ai/automation/hooks#my-hook
metadata:
{ "openclaw": { "emoji": "🔗", "events": ["command:new"], "requires": { "bins": ["node"] } } }
---
@@ -220,4 +220,4 @@ Test your hooks by:
## Documentation
Full documentation: https://docs.openclaw.ai/hooks
Full documentation: https://docs.openclaw.ai/automation/hooks


@@ -1,7 +1,7 @@
---
name: boot-md
description: "Run BOOT.md on gateway startup"
homepage: https://docs.openclaw.ai/hooks#boot-md
homepage: https://docs.openclaw.ai/automation/hooks#boot-md
metadata:
{
"openclaw":


@@ -1,7 +1,7 @@
---
name: command-logger
description: "Log all command events to a centralized audit file"
homepage: https://docs.openclaw.ai/hooks#command-logger
homepage: https://docs.openclaw.ai/automation/hooks#command-logger
metadata:
{
"openclaw":


@@ -1,7 +1,7 @@
---
name: session-memory
description: "Save session context to memory when /new command is issued"
homepage: https://docs.openclaw.ai/hooks#session-memory
homepage: https://docs.openclaw.ai/automation/hooks#session-memory
metadata:
{
"openclaw":


@@ -233,7 +233,7 @@ describe("resolveOpenClawMetadata", () => {
const content = `---
name: session-memory
description: "Save session context to memory when /new command is issued"
homepage: https://docs.openclaw.ai/hooks#session-memory
homepage: https://docs.openclaw.ai/automation/hooks#session-memory
metadata:
{
"openclaw":


@@ -5,11 +5,8 @@ import { telegramOutbound } from "../../channels/plugins/outbound/telegram.js";
import { whatsappOutbound } from "../../channels/plugins/outbound/whatsapp.js";
import { setActivePluginRegistry } from "../../plugins/runtime.js";
import { markdownToSignalTextChunks } from "../../signal/format.js";
import {
createIMessageTestPlugin,
createOutboundTestPlugin,
createTestRegistry,
} from "../../test-utils/channel-plugins.js";
import { createOutboundTestPlugin, createTestRegistry } from "../../test-utils/channel-plugins.js";
import { createIMessageTestPlugin } from "../../test-utils/imessage-test-plugin.js";
const mocks = vi.hoisted(() => ({
appendAssistantMessageToSessionTranscript: vi.fn(async () => ({ ok: true, sessionFile: "x" })),


@@ -9,11 +9,8 @@ import { telegramPlugin } from "../../../extensions/telegram/src/channel.js";
import { whatsappPlugin } from "../../../extensions/whatsapp/src/channel.js";
import { jsonResult } from "../../agents/tools/common.js";
import { setActivePluginRegistry } from "../../plugins/runtime.js";
import {
createIMessageTestPlugin,
createOutboundTestPlugin,
createTestRegistry,
} from "../../test-utils/channel-plugins.js";
import { createOutboundTestPlugin, createTestRegistry } from "../../test-utils/channel-plugins.js";
import { createIMessageTestPlugin } from "../../test-utils/imessage-test-plugin.js";
import { loadWebMedia } from "../../web/media.js";
import { runMessageAction } from "./message-action-runner.js";


@@ -1,7 +1,8 @@
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
import type { ChannelOutboundAdapter, ChannelPlugin } from "../../channels/plugins/types.js";
import { setActivePluginRegistry } from "../../plugins/runtime.js";
import { createIMessageTestPlugin, createTestRegistry } from "../../test-utils/channel-plugins.js";
import { createTestRegistry } from "../../test-utils/channel-plugins.js";
import { createIMessageTestPlugin } from "../../test-utils/imessage-test-plugin.js";
import { sendMessage, sendPoll } from "./message.js";
const setRegistry = (registry: ReturnType<typeof createTestRegistry>) => {


@@ -68,7 +68,13 @@ vi.mock("../logging/subsystem.js", () => ({
},
}));
vi.mock("node:child_process", () => ({ spawn: vi.fn() }));
vi.mock(import("node:child_process"), async (importOriginal) => {
const actual = await importOriginal<typeof import("node:child_process")>();
return {
...actual,
spawn: vi.fn(),
};
});
import { spawn as mockedSpawn } from "node:child_process";
import type { OpenClawConfig } from "../config/config.js";


@@ -160,6 +160,57 @@ type BindingScope = {
memberRoleIds: Set<string>;
};
type EvaluatedBindingsCache = {
bindingsRef: OpenClawConfig["bindings"];
byChannelAccount: Map<string, EvaluatedBinding[]>;
};
const evaluatedBindingsCacheByCfg = new WeakMap<OpenClawConfig, EvaluatedBindingsCache>();
const MAX_EVALUATED_BINDINGS_CACHE_KEYS = 2000;
function getEvaluatedBindingsForChannelAccount(
cfg: OpenClawConfig,
channel: string,
accountId: string,
): EvaluatedBinding[] {
const bindingsRef = cfg.bindings;
const existing = evaluatedBindingsCacheByCfg.get(cfg);
const cache =
existing && existing.bindingsRef === bindingsRef
? existing
: { bindingsRef, byChannelAccount: new Map<string, EvaluatedBinding[]>() };
if (cache !== existing) {
evaluatedBindingsCacheByCfg.set(cfg, cache);
}
const cacheKey = `${channel}\t${accountId}`;
const hit = cache.byChannelAccount.get(cacheKey);
if (hit) {
return hit;
}
const evaluated: EvaluatedBinding[] = listBindings(cfg).flatMap((binding) => {
if (!binding || typeof binding !== "object") {
return [];
}
if (!matchesChannel(binding.match, channel)) {
return [];
}
if (!matchesAccountId(binding.match?.accountId, accountId)) {
return [];
}
return [{ binding, match: normalizeBindingMatch(binding.match) }];
});
cache.byChannelAccount.set(cacheKey, evaluated);
if (cache.byChannelAccount.size > MAX_EVALUATED_BINDINGS_CACHE_KEYS) {
cache.byChannelAccount.clear();
cache.byChannelAccount.set(cacheKey, evaluated);
}
return evaluated;
}
function normalizePeerConstraint(
peer: { kind?: string; id?: string } | undefined,
): NormalizedPeerConstraint {
@@ -242,18 +293,7 @@ export function resolveAgentRoute(input: ResolveAgentRouteInput): ResolvedAgentR
const memberRoleIds = input.memberRoleIds ?? [];
const memberRoleIdSet = new Set(memberRoleIds);
const bindings: EvaluatedBinding[] = listBindings(input.cfg).flatMap((binding) => {
if (!binding || typeof binding !== "object") {
return [];
}
if (!matchesChannel(binding.match, channel)) {
return [];
}
if (!matchesAccountId(binding.match?.accountId, accountId)) {
return [];
}
return [{ binding, match: normalizeBindingMatch(binding.match) }];
});
const bindings = getEvaluatedBindingsForChannelAccount(input.cfg, channel, accountId);
const dmScope = input.cfg.session?.dmScope ?? "main";
const identityLinks = input.cfg.session?.identityLinks;


@@ -153,6 +153,53 @@ describe("security audit", () => {
).toBe(true);
});
it("warns when gateway.tools.allow re-enables dangerous HTTP /tools/invoke tools (loopback)", async () => {
const cfg: OpenClawConfig = {
gateway: {
bind: "loopback",
auth: { token: "secret" },
tools: { allow: ["sessions_spawn"] },
},
};
const res = await runSecurityAudit({
config: cfg,
env: {},
includeFilesystem: false,
includeChannelSecurity: false,
});
expect(
res.findings.some(
(f) => f.checkId === "gateway.tools_invoke_http.dangerous_allow" && f.severity === "warn",
),
).toBe(true);
});
it("flags dangerous gateway.tools.allow over HTTP as critical when gateway binds beyond loopback", async () => {
const cfg: OpenClawConfig = {
gateway: {
bind: "lan",
auth: { token: "secret" },
tools: { allow: ["sessions_spawn", "gateway"] },
},
};
const res = await runSecurityAudit({
config: cfg,
env: {},
includeFilesystem: false,
includeChannelSecurity: false,
});
expect(
res.findings.some(
(f) =>
f.checkId === "gateway.tools_invoke_http.dangerous_allow" && f.severity === "critical",
),
).toBe(true);
});
it("does not warn for auth rate limiting when configured", async () => {
const cfg: OpenClawConfig = {
gateway: {


@@ -33,6 +33,7 @@ import {
formatPermissionRemediation,
inspectPathPermissions,
} from "./audit-fs.js";
import { DEFAULT_GATEWAY_HTTP_TOOL_DENY } from "./dangerous-tools.js";
export type SecurityAuditSeverity = "info" | "warn" | "critical";
@@ -258,6 +259,34 @@ function collectGatewayConfigFindings(
(auth.mode === "token" && hasToken) || (auth.mode === "password" && hasPassword);
const hasTailscaleAuth = auth.allowTailscale && tailscaleMode === "serve";
const hasGatewayAuth = hasSharedSecret || hasTailscaleAuth;
// HTTP /tools/invoke is intended for narrow automation, not session orchestration/admin operations.
// If operators opt-in to re-enabling these tools over HTTP, warn loudly so the choice is explicit.
const gatewayToolsAllowRaw = Array.isArray(cfg.gateway?.tools?.allow)
? cfg.gateway?.tools?.allow
: [];
const gatewayToolsAllow = new Set(
gatewayToolsAllowRaw
.map((v) => (typeof v === "string" ? v.trim().toLowerCase() : ""))
.filter(Boolean),
);
const reenabledOverHttp = DEFAULT_GATEWAY_HTTP_TOOL_DENY.filter((name) =>
gatewayToolsAllow.has(name),
);
if (reenabledOverHttp.length > 0) {
const extraRisk = bind !== "loopback" || tailscaleMode === "funnel";
findings.push({
checkId: "gateway.tools_invoke_http.dangerous_allow",
severity: extraRisk ? "critical" : "warn",
title: "Gateway HTTP /tools/invoke re-enables dangerous tools",
detail:
`gateway.tools.allow includes ${reenabledOverHttp.join(", ")} which removes them from the default HTTP deny list. ` +
"This can allow remote session spawning / control-plane actions via HTTP and increases RCE blast radius if the gateway is reachable.",
remediation:
"Remove these entries from gateway.tools.allow (recommended). " +
"If you keep them enabled, keep gateway.bind loopback-only (or tailnet-only), restrict network exposure, and treat the gateway token/password as full-admin.",
});
}
if (bind !== "loopback" && !hasSharedSecret && auth.mode !== "trusted-proxy") {
findings.push({
checkId: "gateway.bind_no_auth",

View File

@@ -0,0 +1,37 @@
// Shared tool-risk constants.
// Keep these centralized so gateway HTTP restrictions, security audits, and ACP prompts don't drift.
/**
* Tools denied via Gateway HTTP `POST /tools/invoke` by default.
* These are high-risk because they enable session orchestration, control-plane actions,
* or interactive flows that don't make sense over a non-interactive HTTP surface.
*/
export const DEFAULT_GATEWAY_HTTP_TOOL_DENY = [
// Session orchestration — spawning agents remotely is RCE
"sessions_spawn",
// Cross-session injection — message injection across sessions
"sessions_send",
// Gateway control plane — prevents gateway reconfiguration via HTTP
"gateway",
// Interactive setup — requires terminal QR scan, hangs on HTTP
"whatsapp_login",
] as const;
/**
* ACP tools that should always require explicit user approval.
* ACP is an automation surface; we never want "silent yes" for mutating/execution tools.
*/
export const DANGEROUS_ACP_TOOL_NAMES = [
"exec",
"spawn",
"shell",
"sessions_spawn",
"sessions_send",
"gateway",
"fs_write",
"fs_delete",
"fs_move",
"apply_patch",
] as const;
export const DANGEROUS_ACP_TOOLS = new Set<string>(DANGEROUS_ACP_TOOL_NAMES);
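A consumer of these constants would gate approval with a simple set-membership check. The `requiresApproval` helper below is a hypothetical sketch, not part of this diff:

```typescript
// Mirrors DANGEROUS_ACP_TOOLS from the new dangerous-tools module.
const DANGEROUS_ACP_TOOLS = new Set([
  "exec", "spawn", "shell",
  "sessions_spawn", "sessions_send", "gateway",
  "fs_write", "fs_delete", "fs_move", "apply_patch",
]);

// Hypothetical gate: any dangerous tool always requires explicit user approval.
function requiresApproval(toolName: string): boolean {
  return DANGEROUS_ACP_TOOLS.has(toolName.trim().toLowerCase());
}
```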

View File

@@ -10,9 +10,13 @@ import {
} from "./sticker-cache.js";
// Mock the state directory to use a temp location
vi.mock("../config/paths.js", () => ({
STATE_DIR: "/tmp/openclaw-test-sticker-cache",
}));
vi.mock("../config/paths.js", async (importOriginal) => {
const actual = await importOriginal<typeof import("../config/paths.js")>();
return {
...actual,
STATE_DIR: "/tmp/openclaw-test-sticker-cache",
};
});
const TEST_CACHE_DIR = "/tmp/openclaw-test-sticker-cache/telegram";
const TEST_CACHE_FILE = path.join(TEST_CACHE_DIR, "sticker-cache.json");
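The fix in this hunk switches from replacing the whole `paths.js` module to a partial mock: spread the real module, override only `STATE_DIR`. A plain-object analogue of that spread-and-override pattern:

```typescript
// Stand-in for the real module's exports (values are illustrative).
const actual = { STATE_DIR: "/real/state", CONFIG_DIR: "/real/config" };

// Partial override: every other export survives, only STATE_DIR changes.
const mocked = { ...actual, STATE_DIR: "/tmp/openclaw-test-sticker-cache" };
```

This is why the test no longer breaks other importers of `paths.js`: exports the old full mock silently dropped are now passed through from `importOriginal`.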

View File

@@ -5,8 +5,6 @@ import type {
ChannelPlugin,
} from "../channels/plugins/types.js";
import type { PluginRegistry } from "../plugins/registry.js";
import { imessageOutbound } from "../channels/plugins/outbound/imessage.js";
import { normalizeIMessageHandle } from "../imessage/targets.js";
export const createTestRegistry = (channels: PluginRegistry["channels"] = []): PluginRegistry => ({
plugins: [],
@@ -24,62 +22,6 @@ export const createTestRegistry = (channels: PluginRegistry["channels"] = []): P
diagnostics: [],
});
export const createIMessageTestPlugin = (params?: {
outbound?: ChannelOutboundAdapter;
}): ChannelPlugin => ({
id: "imessage",
meta: {
id: "imessage",
label: "iMessage",
selectionLabel: "iMessage (imsg)",
docsPath: "/channels/imessage",
blurb: "iMessage test stub.",
aliases: ["imsg"],
},
capabilities: { chatTypes: ["direct", "group"], media: true },
config: {
listAccountIds: () => [],
resolveAccount: () => ({}),
},
status: {
collectStatusIssues: (accounts) =>
accounts.flatMap((account) => {
const lastError = typeof account.lastError === "string" ? account.lastError.trim() : "";
if (!lastError) {
return [];
}
return [
{
channel: "imessage",
accountId: account.accountId,
kind: "runtime",
message: `Channel error: ${lastError}`,
},
];
}),
},
outbound: params?.outbound ?? imessageOutbound,
messaging: {
targetResolver: {
looksLikeId: (raw) => {
const trimmed = raw.trim();
if (!trimmed) {
return false;
}
if (/^(imessage:|sms:|auto:|chat_id:|chat_guid:|chat_identifier:)/i.test(trimmed)) {
return true;
}
if (trimmed.includes("@")) {
return true;
}
return /^\+?\d{3,}$/.test(trimmed);
},
hint: "<handle|chat_id:ID>",
},
normalizeTarget: (raw) => normalizeIMessageHandle(raw),
},
});
export const createOutboundTestPlugin = (params: {
id: ChannelId;
outbound: ChannelOutboundAdapter;

View File

@@ -0,0 +1,59 @@
import type { ChannelOutboundAdapter, ChannelPlugin } from "../channels/plugins/types.js";
import { imessageOutbound } from "../channels/plugins/outbound/imessage.js";
import { normalizeIMessageHandle } from "../imessage/targets.js";
export const createIMessageTestPlugin = (params?: {
outbound?: ChannelOutboundAdapter;
}): ChannelPlugin => ({
id: "imessage",
meta: {
id: "imessage",
label: "iMessage",
selectionLabel: "iMessage (imsg)",
docsPath: "/channels/imessage",
blurb: "iMessage test stub.",
aliases: ["imsg"],
},
capabilities: { chatTypes: ["direct", "group"], media: true },
config: {
listAccountIds: () => [],
resolveAccount: () => ({}),
},
status: {
collectStatusIssues: (accounts) =>
accounts.flatMap((account) => {
const lastError = typeof account.lastError === "string" ? account.lastError.trim() : "";
if (!lastError) {
return [];
}
return [
{
channel: "imessage",
accountId: account.accountId,
kind: "runtime",
message: `Channel error: ${lastError}`,
},
];
}),
},
outbound: params?.outbound ?? imessageOutbound,
messaging: {
targetResolver: {
looksLikeId: (raw) => {
const trimmed = raw.trim();
if (!trimmed) {
return false;
}
if (/^(imessage:|sms:|auto:|chat_id:|chat_guid:|chat_identifier:)/i.test(trimmed)) {
return true;
}
if (trimmed.includes("@")) {
return true;
}
return /^\+?\d{3,}$/.test(trimmed);
},
hint: "<handle|chat_id:ID>",
},
normalizeTarget: (raw) => normalizeIMessageHandle(raw),
},
});
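The `looksLikeId` heuristic in the test plugin above accepts three shapes: a known scheme prefix, an email-style handle, or a bare phone number. Extracted as a standalone function for illustration:

```typescript
// Same logic as the targetResolver.looksLikeId callback above.
function looksLikeIMessageId(raw: string): boolean {
  const trimmed = raw.trim();
  if (!trimmed) {
    return false;
  }
  // Explicit scheme prefixes always count as IDs.
  if (/^(imessage:|sms:|auto:|chat_id:|chat_guid:|chat_identifier:)/i.test(trimmed)) {
    return true;
  }
  // Email-style Apple ID handles.
  if (trimmed.includes("@")) {
    return true;
  }
  // Bare phone numbers: optional +, at least three digits.
  return /^\+?\d{3,}$/.test(trimmed);
}
```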

View File

@@ -6,6 +6,9 @@ import * as tts from "./tts.js";
vi.mock("@mariozechner/pi-ai", () => ({
completeSimple: vi.fn(),
// Some auth helpers import oauth provider metadata at module load time.
getOAuthProviders: () => [],
getOAuthApiKey: vi.fn(async () => null),
}));
vi.mock("../agents/pi-embedded-runner/model.js", () => ({

View File

@@ -2,11 +2,9 @@ import { defineConfig } from "vitest/config";
import baseConfig from "./vitest.config.ts";
const baseTest = (baseConfig as { test?: { include?: string[]; exclude?: string[] } }).test ?? {};
const include = baseTest.include ?? [
"src/**/*.test.ts",
"extensions/**/*.test.ts",
"test/format-error.test.ts",
];
const include = (
baseTest.include ?? ["src/**/*.test.ts", "extensions/**/*.test.ts", "test/format-error.test.ts"]
).filter((pattern) => !pattern.includes("extensions/"));
const exclude = baseTest.exclude ?? [];
export default defineConfig({
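The config change above derives the include list from the base config, then filters out any `extensions/` patterns for this variant. The derivation can be sketched in isolation (defaults copied from the hunk):

```typescript
// Fallback patterns, as in the base vitest config above.
const DEFAULT_INCLUDE = [
  "src/**/*.test.ts",
  "extensions/**/*.test.ts",
  "test/format-error.test.ts",
];

// Use the base config's include list if present, else the defaults,
// and drop every pattern that targets extensions/.
function deriveInclude(base?: string[]): string[] {
  return (base ?? DEFAULT_INCLUDE).filter((pattern) => !pattern.includes("extensions/"));
}
```

Note the filter runs on both branches, so even an explicit base include list has its `extensions/` patterns stripped, which is the behavioral change this hunk introduces.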