docs(markdownlint): enable autofixable rules and normalize links

Sebastian
2026-02-06 09:55:12 -05:00
parent 1bf9f237f7
commit c7aec0660e
84 changed files with 261 additions and 198 deletions

View File

@@ -110,9 +110,11 @@ Details: [Gateway protocol](/gateway/protocol), [Pairing](/start/pairing),
- Preferred: Tailscale or VPN.
- Alternative: SSH tunnel.
```bash
ssh -N -L 18789:127.0.0.1:18789 user@host
```
- The same handshake + auth token apply over the tunnel.
- TLS + optional pinning can be enabled for WS in remote setups.
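
Once the tunnel is up, a client dials the forwarded local port as if the gateway were running locally. A minimal sketch, assuming the `ws` package and a token passed as a query parameter — the actual handshake fields are in the gateway protocol docs, so treat the env var and URL shape here as illustrative:
```typescript
// Sketch: dial the gateway through the tunnel's local end (port 18789 above).
// The `ws` package and the token query parameter are assumptions, not the
// documented handshake; see /gateway/protocol for the real flow.
import WebSocket from "ws";

const token = process.env.GATEWAY_TOKEN ?? ""; // hypothetical env var
const socket = new WebSocket(
  `ws://127.0.0.1:18789/?token=${encodeURIComponent(token)}`,
);

socket.on("open", () => {
  console.log("connected through the tunnel");
  socket.close();
});
socket.on("error", (err) => console.error("connect failed:", err));
```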

View File

@@ -301,7 +301,7 @@ Common intents (copy/paste):
}
```
-2. Allow only specific groups (WhatsApp)
+1. Allow only specific groups (WhatsApp)
```json5
{
@@ -316,7 +316,7 @@ Common intents (copy/paste):
}
```
-3. Allow all groups but require mention (explicit)
+1. Allow all groups but require mention (explicit)
```json5
{
@@ -328,7 +328,7 @@ Common intents (copy/paste):
}
```
-4. Only the owner can trigger in groups (WhatsApp)
+1. Only the owner can trigger in groups (WhatsApp)
```json5
{
```

View File

@@ -302,8 +302,8 @@ Why OpenAI batch is fast + cheap:
- For large backfills, OpenAI is typically the fastest option we support because we can submit many embedding requests in a single batch job and let OpenAI process them asynchronously.
- OpenAI offers discounted pricing for Batch API workloads, so large indexing runs are usually cheaper than sending the same requests synchronously.
- See the OpenAI Batch API docs and pricing for details:
-- https://platform.openai.com/docs/api-reference/batch
-- https://platform.openai.com/pricing
+- [https://platform.openai.com/docs/api-reference/batch](https://platform.openai.com/docs/api-reference/batch)
+- [https://platform.openai.com/pricing](https://platform.openai.com/pricing)
Config example:
@@ -382,11 +382,11 @@ Implementation sketch:
- **Vector**: top `maxResults * candidateMultiplier` by cosine similarity.
- **BM25**: top `maxResults * candidateMultiplier` by FTS5 BM25 rank (lower is better).
-2. Convert BM25 rank into a 0..1-ish score:
+1. Convert BM25 rank into a 0..1-ish score:
- `textScore = 1 / (1 + max(0, bm25Rank))`
-3. Union candidates by chunk id and compute a weighted score:
+1. Union candidates by chunk id and compute a weighted score:
- `finalScore = vectorWeight * vectorScore + textWeight * textScore`
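
Taken together, the steps above amount to a small merge function. A sketch of the union-and-score step, using the two formulas from this list; the type names and candidate shapes are illustrative, not the repo's actual types:
```typescript
// Illustrative candidate shapes; each input list is assumed to already be
// the top `maxResults * candidateMultiplier` hits from its retriever.
interface VectorHit { chunkId: string; vectorScore: number } // cosine sim, higher is better
interface Bm25Hit { chunkId: string; bm25Rank: number }      // FTS5 BM25 rank, lower is better

function mergeHybrid(
  vector: VectorHit[],
  bm25: Bm25Hit[],
  vectorWeight: number,
  textWeight: number,
  maxResults: number,
): Array<{ chunkId: string; finalScore: number }> {
  const scores = new Map<string, { vectorScore: number; textScore: number }>();
  for (const hit of vector) {
    scores.set(hit.chunkId, { vectorScore: hit.vectorScore, textScore: 0 });
  }
  for (const hit of bm25) {
    // textScore = 1 / (1 + max(0, bm25Rank)) maps rank into a 0..1-ish range
    const entry = scores.get(hit.chunkId) ?? { vectorScore: 0, textScore: 0 };
    entry.textScore = 1 / (1 + Math.max(0, hit.bm25Rank));
    scores.set(hit.chunkId, entry);
  }
  // finalScore = vectorWeight * vectorScore + textWeight * textScore
  return [...scores.entries()]
    .map(([chunkId, s]) => ({
      chunkId,
      finalScore: vectorWeight * s.vectorScore + textWeight * s.textScore,
    }))
    .sort((a, b) => b.finalScore - a.finalScore)
    .slice(0, maxResults);
}
```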

View File

@@ -136,14 +136,14 @@ Moonshot uses OpenAI-compatible endpoints, so configure it as a custom provider:
Kimi K2 model IDs:
-{/_ moonshot-kimi-k2-model-refs:start _/ && null}
+{/_moonshot-kimi-k2-model-refs:start_/ && null}
- `moonshot/kimi-k2.5`
- `moonshot/kimi-k2-0905-preview`
- `moonshot/kimi-k2-turbo-preview`
- `moonshot/kimi-k2-thinking`
- `moonshot/kimi-k2-thinking-turbo`
-{/_ moonshot-kimi-k2-model-refs:end _/ && null}
+{/_moonshot-kimi-k2-model-refs:end_/ && null}
```json5
{
```
@@ -242,7 +242,7 @@ Ollama is a local LLM runtime that provides an OpenAI-compatible API:
- Provider: `ollama`
- Auth: None required (local server)
- Example model: `ollama/llama3.3`
-- Installation: https://ollama.ai
+- Installation: [https://ollama.ai](https://ollama.ai)
```bash
# Install Ollama, then pull a model:
ollama pull llama3.3
```

View File

@@ -110,6 +110,6 @@ This keeps the base prompt small while still enabling targeted skill usage.
When available, the system prompt includes a **Documentation** section that points to the
local OpenClaw docs directory (either `docs/` in the repo workspace or the bundled npm
package docs) and also notes the public mirror, source repo, community Discord, and
-ClawHub (https://clawhub.com) for skills discovery. The prompt instructs the model to consult local docs first
+ClawHub ([https://clawhub.com](https://clawhub.com)) for skills discovery. The prompt instructs the model to consult local docs first
for OpenClaw behavior, commands, configuration, or architecture, and to run
`openclaw status` itself when possible (asking the user only when it lacks access).

View File

@@ -217,7 +217,7 @@ export type SystemEchoParams = Static<typeof SystemEchoParamsSchema>;
export type SystemEchoResult = Static<typeof SystemEchoResultSchema>;
```
-2. **Validation**
+1. **Validation**
In `src/gateway/protocol/index.ts`, export an AJV validator:
@@ -225,7 +225,7 @@ In `src/gateway/protocol/index.ts`, export an AJV validator:
export const validateSystemEchoParams = ajv.compile<SystemEchoParams>(SystemEchoParamsSchema);
```
-3. **Server behavior**
+1. **Server behavior**
Add a handler in `src/gateway/server-methods/system.ts`:
@@ -241,13 +241,13 @@ export const systemHandlers: GatewayRequestHandlers = {
Register it in `src/gateway/server-methods.ts` (already merges `systemHandlers`),
then add `"system.echo"` to `METHODS` in `src/gateway/server.ts`.
-4. **Regenerate**
+1. **Regenerate**
```bash
pnpm protocol:check
```
-5. **Tests + docs**
+1. **Tests + docs**
Add a server test in `src/gateway/server.*.test.ts` and note the method in docs.
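
For orientation, here is roughly what the finished handler from steps 1-3 might look like once wired together. This is a sketch, not the repo's actual code: the import paths, the handler signature, and the `message` field are assumptions (the real `systemHandlers` body is truncated in the diff above):
```typescript
// Sketch only; paths, signature, and the `message` field are assumptions.
import type { GatewayRequestHandlers } from "../server-methods"; // hypothetical path
import type { SystemEchoResult } from "../protocol";
import { validateSystemEchoParams } from "../protocol";

export const systemHandlers: GatewayRequestHandlers = {
  "system.echo": async (params: unknown): Promise<SystemEchoResult> => {
    // validateSystemEchoParams is the AJV type guard from step 2;
    // on success it narrows params to SystemEchoParams.
    if (!validateSystemEchoParams(params)) {
      throw new Error("invalid system.echo params"); // assumed error convention
    }
    return { message: params.message }; // echo the payload back
  },
};
```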
@@ -280,7 +280,7 @@ Unknown frame types are preserved as raw payloads for forward compatibility.
Generated JSON Schema is in the repo at `dist/protocol.schema.json`. The
published raw file is typically available at:
-- https://raw.githubusercontent.com/openclaw/openclaw/main/dist/protocol.schema.json
+- [https://raw.githubusercontent.com/openclaw/openclaw/main/dist/protocol.schema.json](https://raw.githubusercontent.com/openclaw/openclaw/main/dist/protocol.schema.json)
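
Because it is plain JSON Schema, consumers can validate payloads against the published file with any JSON Schema validator. A rough sketch using AJV (fetching at runtime and the shape of the example frame are assumptions; vendoring `dist/protocol.schema.json` works just as well):
```typescript
// Sketch: compile the published protocol schema and validate a payload.
// Requires Node 18+ for global fetch; `npm install ajv`.
import Ajv from "ajv";

const SCHEMA_URL =
  "https://raw.githubusercontent.com/openclaw/openclaw/main/dist/protocol.schema.json";

async function main() {
  const schema = await (await fetch(SCHEMA_URL)).json();
  const ajv = new Ajv({ strict: false }); // generated schemas may trip strict mode
  const validate = ajv.compile(schema);
  const candidate = { type: "example" }; // hypothetical frame
  console.log(validate(candidate) ? "valid" : validate.errors);
}

main().catch(console.error);
```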
## When you change schemas