feat(feishu): add markdown tables, positional insert, color_text, and table ops (#29411)

* feat(feishu): add markdown tables, insert, color_text, table ops, and image fixes

Extends feishu_doc on top of #20304 with capabilities that are not yet covered:

Markdown → native table rendering:
- write/append now use the Descendant API instead of Children API,
  enabling GFM markdown tables (block_type 31/32) to render as native
  Feishu tables automatically
- Adaptive column widths calculated from cell content (CJK chars 2x weight)
- Batch insertion for large documents (>1000 blocks, docx-batch-insert.ts)

New actions:
- insert: positional markdown insertion after a given block_id
- color_text: apply color/bold to a text block via [red]...[/red] markup
- insert_table_row / insert_table_column: add rows or columns to a table
- delete_table_rows / delete_table_columns: remove rows or columns
- merge_table_cells: merge a rectangular cell range

Image upload fixes (affects write, append, and upload_image):
- upload_image now accepts data URI and plain base64 in addition to
  url/file_path, covering DALL-E b64_json, canvas screenshots, etc.
- Fix: pass Buffer directly to drive.media.uploadAll instead of
  Readable.from(), which caused Content-Length mismatch for large images
- Fix: apply the same Buffer fix to upload_file, which had the identical
  Readable.from() bug
- Fix: pass drive_route_token via extra field for correct multi-datacenter
  routing (per API docs: required when parent_node is a document block ID)

* fix(feishu): add documentBlockDescendant mock to docx.test.ts

write/append now use the Descendant API (documentBlockDescendant.create)
instead of Children API. The existing test mock was missing this SDK
method, causing processImages to never be reached and fetchRemoteMedia
to go uncalled.

Added blockDescendantCreateMock returning an image block so the
'skips image upload when markdown image URL is blocked' test flows
through processImages as expected.

* fix(feishu): address bot review feedback

- resolveUploadInput: remove length < 1024 guard on file path detection.
  Prefix patterns (isAbsolute / ~ / ./ / ../) already correctly distinguish
  file paths from base64 strings at any length. The old guard caused file
  paths ≥1024 chars to fall through to the base64 branch incorrectly.

- parseColorMarkup: add comment clarifying that mismatched closing tags
  (e.g. [red]text[/green]) are intentional — opening tag style is applied,
  closing tag is consumed regardless of name.

* fix(feishu): address second-round codex bot review feedback

P1 - Reject single oversized subtrees in batch insert (docx-batch-insert.ts):
  A first-level block whose descendant count exceeds BATCH_SIZE (1000) cannot
  be split atomically (e.g. a very large table). Previously such a block was
  silently added to the current batch and sent as an oversized request,
  violating the API limit. Now throws a descriptive error so callers know to
  reduce the content size.

P2 - Preserve unmatched brackets in color markup parser (docx-color-text.ts):
  Text like 'Revenue [Q1] up' contains a bracket pair with no matching '[/...]'
  closer. The original regex dropped the '[' character in this case, silently
  corrupting the text. Fixed by appending '|\[' to the plain-text alternative
  so any '[' that does not open a complete tag is captured as literal text.
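
A reduced reproduction of the fix (the tag set is trimmed to two names for
illustration; the real parser in docx-color-text.ts also tracks styles):

```typescript
// The trailing "|\[" alternative captures any '[' that does not open a
// complete, recognized tag as literal text instead of dropping it.
function toPlainSegments(content: string): string[] {
  const tagPattern = /\[(red|green)\](.*?)\[\/[^\]]+\]|([^[]+|\[)/gs;
  const out: string[] = [];
  let m: RegExpExecArray | null;
  while ((m = tagPattern.exec(content)) !== null) {
    // m[3] is plain text (including a lone '['); m[2] is tagged text.
    out.push(m[3] !== undefined ? m[3] : m[2]);
  }
  return out;
}
```

Joining the segments back together now round-trips text like
'Revenue [Q1] up' unchanged.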

* fix(feishu): address third-round codex bot review feedback

P2 - Throw ENOENT for non-existing absolute image paths (docx.ts):
  Previously a non-existing absolute path like /tmp/missing.png fell
  through to Buffer.from(..., 'base64') and uploaded garbage bytes.
  Now throws a descriptive ENOENT error and hints at data URI format
  for callers intending to pass JPEG binary data (which starts with /9j/).
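
A hedged sketch of the guard (function name and message wording are
illustrative):

```typescript
import { existsSync } from "node:fs";
import { isAbsolute } from "node:path";

// An absolute path that does not exist on disk fails fast instead of being
// decoded as base64. The hint matters because base64 JPEG data starts with
// "/9j/", which isAbsolute() also accepts.
function assertReadableImagePath(input: string): void {
  if (isAbsolute(input) && !existsSync(input)) {
    throw new Error(
      `ENOENT: image file not found: ${input}. If this is base64 JPEG data ` +
        `(it starts with "/9j/"), pass it as a data URI instead: ` +
        `data:image/jpeg;base64,...`,
    );
  }
}
```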

P2 - Fail clearly when insert anchor block is not found (docx.ts):
  insertDoc previously set insertIndex to -1 (append) when after_block_id
  was absent from the parent's child list, silently inserting at the wrong
  position. Two fixes:
  1. Paginate through all children (documentBlockChildren.get returns up to
     200 per page) before searching for the anchor.
  2. Throw a descriptive error if after_block_id is still not found after
     full pagination, instead of silently falling back to append.
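
The paginate-then-fail behaviour can be sketched generically (the function
and page shape here are assumptions; the real code calls
documentBlockChildren.get, which returns up to 200 children per page):

```typescript
type ChildPage = {
  items: { block_id: string }[];
  has_more?: boolean;
  page_token?: string;
};

// Fetch every page of children before searching, and throw if the anchor
// is still absent rather than silently falling back to append.
async function findAnchorIndex(
  fetchPage: (pageToken?: string) => Promise<ChildPage>,
  afterBlockId: string,
): Promise<number> {
  const childIds: string[] = [];
  let pageToken: string | undefined;
  do {
    const page = await fetchPage(pageToken);
    for (const item of page.items) childIds.push(item.block_id);
    pageToken = page.has_more ? page.page_token : undefined;
  } while (pageToken);

  const idx = childIds.indexOf(afterBlockId);
  if (idx === -1) {
    throw new Error(
      `after_block_id "${afterBlockId}" not found among ${childIds.length} children`,
    );
  }
  return idx; // insertion position is idx + 1
}
```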

* fix(feishu): address fourth-round codex bot review feedback

- Enforce mutual exclusivity across all three upload sources (url, file_path,
  image): throw immediately when more than one is provided, instead of silently
  preferring the image branch and ignoring the others.
- Validate plain base64 payloads before decoding: reject strings that contain
  characters outside the standard base64 alphabet ([A-Za-z0-9+/=]) so that
  malformed inputs fail fast with a clear error rather than decoding to garbage
  bytes and producing an opaque Feishu API failure downstream.
  Also throw if the decoded buffer is empty.
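
A sketch of the fail-fast validation (helper name and messages are
illustrative):

```typescript
import { Buffer } from "node:buffer";

// Reject anything outside the standard base64 alphabet, and reject payloads
// that decode to zero bytes (e.g. padding-only strings), before the bytes
// ever reach the Feishu upload call.
function decodeBase64Strict(payload: string): Buffer {
  if (!/^[A-Za-z0-9+/=]+$/.test(payload)) {
    throw new Error(
      "image is not valid base64: contains characters outside [A-Za-z0-9+/=]",
    );
  }
  const buf = Buffer.from(payload, "base64");
  if (buf.length === 0) {
    throw new Error("image base64 decoded to an empty buffer");
  }
  return buf;
}
```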

* fix(feishu): address fifth-round codex bot review feedback

- parseColorMarkup: restrict opening tag regex to known colour/style names
  (bg:*, bold, red, orange, yellow, green, blue, purple, grey/gray) so that
  ordinary bracket tokens like [Q1] can no longer consume a subsequent real
  closing tag ([/red]) and corrupt the surrounding styled spans. Unknown tags
  now fall through to the plain-text alternatives and are emitted literally.
- resolveUploadInput: estimate decoded byte count from base64 input length
  (ceil(len * 3 / 4)) BEFORE allocating the full Buffer, preventing oversized
  payloads from spiking memory before the maxBytes limit is enforced. Applies
  to both the data-URI branch and the plain-base64 branch.
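
The pre-allocation guard amounts to a one-line bound (helper names are
illustrative): each 4 base64 characters decode to at most 3 bytes, so
ceil(len * 3 / 4) is an upper bound on the decoded size — padding only
shrinks it.

```typescript
// Upper bound on decoded size, computed from string length alone.
function estimatedDecodedBytes(base64Len: number): number {
  return Math.ceil((base64Len * 3) / 4);
}

// Enforce the size limit BEFORE Buffer.from allocates the full payload.
function guardMaxBytes(payload: string, maxBytes: number): void {
  const estimate = estimatedDecodedBytes(payload.length);
  if (estimate > maxBytes) {
    throw new Error(
      `image payload ~${estimate} bytes exceeds limit of ${maxBytes} bytes`,
    );
  }
}
```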

* fix(feishu): address sixth-round codex bot review feedback

- docx-table-ops: apply MIN/MAX_COLUMN_WIDTH clamping in the empty-table
  branch so tables with 15+ columns don't produce sub-50 widths that Feishu
  rejects as invalid column_width values.
- docx.ts (data URI branch): validate the ';base64' marker before decoding
  so plain/URL-encoded data URIs are rejected with a clear error; also validate
  the payload against the base64 alphabet (same guard already applied in the
  plain-base64 branch) so malformed inputs fail fast rather than producing
  opaque downstream Feishu errors.
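
A sketch of the data-URI guard (parsing details and names are illustrative):

```typescript
// Require the ";base64" marker before the comma so URL-encoded data URIs
// (e.g. data:image/png,%89PNG...) are rejected early with a clear message
// instead of being decoded as base64 garbage.
function splitDataUri(uri: string): { mime: string; payload: string } {
  const comma = uri.indexOf(",");
  if (!uri.startsWith("data:") || comma === -1) {
    throw new Error("not a data URI");
  }
  const header = uri.slice(5, comma); // e.g. "image/png;base64"
  if (!header.endsWith(";base64")) {
    throw new Error(
      "only base64-encoded data URIs are supported (missing ';base64' marker)",
    );
  }
  return { mime: header.slice(0, -";base64".length), payload: uri.slice(comma + 1) };
}
```

The payload returned here would then go through the same base64-alphabet
validation as the plain-base64 branch.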

* Feishu: align docx descendant insertion tests and changelog

---------

Co-authored-by: Tak Hoffman <781889+Takhoffman@users.noreply.github.com>
Author: Elarwei
Date: 2026-02-28 23:58:56 +08:00
Committed by: GitHub
Parent: 4ad49de89d
Commit: 0740fb83d7
7 changed files with 1082 additions and 58 deletions


@@ -78,6 +78,7 @@ Docs: https://docs.openclaw.ai
- FS/Sandbox workspace boundaries: add a dedicated `outside-workspace` safe-open error code for root-escape checks, and propagate specific outside-workspace messages across edit/browser/media consumers instead of generic not-found/invalid-path fallbacks. (#29715) Thanks @YuzuruS.
- Config/Doctor group allowlist diagnostics: align `groupPolicy: "allowlist"` warnings with per-channel runtime semantics by excluding Google Chat sender-list checks and by warning when no-fallback channels (for example iMessage) omit `groupAllowFrom`, with regression coverage. (#28477) Thanks @tonydehnke.
- Onboarding/Custom providers: use Azure OpenAI-specific verification auth/payload shape (`api-key`, deployment-path chat completions payload) when probing Azure endpoints so valid Azure custom-provider setup no longer fails preflight. (#29421) Thanks @kunalk16.
- Feishu/Docx editing tools: add `feishu_doc` positional insert, table row/column operations, table-cell merge, and color-text updates; switch markdown write/append/insert to Descendant API insertion with large-document batching; and harden image uploads for data URI/base64/local-path inputs with strict validation and routing-safe upload metadata. (#29411) Thanks @Elarwei001.
## 2026.2.26


@@ -17,6 +17,14 @@ export const FeishuDocSchema = Type.Union([
doc_token: Type.String({ description: "Document token" }),
content: Type.String({ description: "Markdown content to append to end of document" }),
}),
Type.Object({
action: Type.Literal("insert"),
doc_token: Type.String({ description: "Document token" }),
content: Type.String({ description: "Markdown content to insert" }),
after_block_id: Type.String({
description: "Insert content after this block ID. Use list_blocks to find block IDs.",
}),
}),
Type.Object({
action: Type.Literal("create"),
title: Type.String({ description: "Document title" }),
@@ -50,6 +58,7 @@ export const FeishuDocSchema = Type.Union([
doc_token: Type.String({ description: "Document token" }),
block_id: Type.String({ description: "Block ID" }),
}),
// Table creation (explicit structure)
Type.Object({
action: Type.Literal("create_table"),
doc_token: Type.String({ description: "Document token" }),
@@ -91,11 +100,60 @@ export const FeishuDocSchema = Type.Union([
minItems: 1,
}),
}),
// Table row/column manipulation
Type.Object({
action: Type.Literal("insert_table_row"),
doc_token: Type.String({ description: "Document token" }),
block_id: Type.String({ description: "Table block ID" }),
row_index: Type.Optional(
Type.Number({ description: "Row index to insert at (-1 for end, default: -1)" }),
),
}),
Type.Object({
action: Type.Literal("insert_table_column"),
doc_token: Type.String({ description: "Document token" }),
block_id: Type.String({ description: "Table block ID" }),
column_index: Type.Optional(
Type.Number({ description: "Column index to insert at (-1 for end, default: -1)" }),
),
}),
Type.Object({
action: Type.Literal("delete_table_rows"),
doc_token: Type.String({ description: "Document token" }),
block_id: Type.String({ description: "Table block ID" }),
row_start: Type.Number({ description: "Start row index (0-based)" }),
row_count: Type.Optional(Type.Number({ description: "Number of rows to delete (default: 1)" })),
}),
Type.Object({
action: Type.Literal("delete_table_columns"),
doc_token: Type.String({ description: "Document token" }),
block_id: Type.String({ description: "Table block ID" }),
column_start: Type.Number({ description: "Start column index (0-based)" }),
column_count: Type.Optional(
Type.Number({ description: "Number of columns to delete (default: 1)" }),
),
}),
Type.Object({
action: Type.Literal("merge_table_cells"),
doc_token: Type.String({ description: "Document token" }),
block_id: Type.String({ description: "Table block ID" }),
row_start: Type.Number({ description: "Start row index" }),
row_end: Type.Number({ description: "End row index (exclusive)" }),
column_start: Type.Number({ description: "Start column index" }),
column_end: Type.Number({ description: "End column index (exclusive)" }),
}),
// Image / file upload
Type.Object({
action: Type.Literal("upload_image"),
doc_token: Type.String({ description: "Document token" }),
url: Type.Optional(Type.String({ description: "Remote image URL (http/https)" })),
file_path: Type.Optional(Type.String({ description: "Local image file path" })),
image: Type.Optional(
Type.String({
description:
"Image as data URI (data:image/png;base64,...) or plain base64 string. Use instead of url/file_path for DALL-E outputs, canvas screenshots, etc.",
}),
),
parent_block_id: Type.Optional(
Type.String({ description: "Parent block ID (default: document root)" }),
),
@@ -117,6 +175,16 @@ export const FeishuDocSchema = Type.Union([
),
filename: Type.Optional(Type.String({ description: "Optional filename override" })),
}),
// Text color / style
Type.Object({
action: Type.Literal("color_text"),
doc_token: Type.String({ description: "Document token" }),
block_id: Type.String({ description: "Text block ID to update" }),
content: Type.String({
description:
'Text with color markup. Tags: [red], [green], [blue], [orange], [yellow], [purple], [grey], [bold], [bg:yellow]. Example: "Revenue [green]+15%[/green] YoY"',
}),
}),
]);
export type FeishuDocParams = Static<typeof FeishuDocSchema>;


@@ -0,0 +1,190 @@
/**
* Batch insertion for large Feishu documents (>1000 blocks).
*
* The Feishu Descendant API has a limit of 1000 blocks per request.
* This module handles splitting large documents into batches while
* preserving parent-child relationships between blocks.
*/
import type * as Lark from "@larksuiteoapi/node-sdk";
import { cleanBlocksForDescendant } from "./docx-table-ops.js";
export const BATCH_SIZE = 1000; // Feishu API limit per request
type Logger = { info?: (msg: string) => void };
/**
* Collect all descendant blocks for a given set of first-level block IDs.
* Recursively traverses the block tree to gather all children.
*/
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- SDK block types
function collectDescendants(blocks: any[], firstLevelIds: string[]): any[] {
const blockMap = new Map<string, any>();
for (const block of blocks) {
blockMap.set(block.block_id, block);
}
const result: any[] = [];
const visited = new Set<string>();
function collect(blockId: string) {
if (visited.has(blockId)) return;
visited.add(blockId);
const block = blockMap.get(blockId);
if (!block) return;
result.push(block);
// Recursively collect children
const children = block.children;
if (Array.isArray(children)) {
for (const childId of children) {
collect(childId);
}
} else if (typeof children === "string") {
collect(children);
}
}
for (const id of firstLevelIds) {
collect(id);
}
return result;
}
/**
* Insert a single batch of blocks using Descendant API.
*
* @param parentBlockId - Parent block to insert into (defaults to docToken)
* @param index - Position within parent's children (-1 = end)
*/
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- SDK block types
async function insertBatch(
client: Lark.Client,
docToken: string,
blocks: any[],
firstLevelBlockIds: string[],
parentBlockId: string = docToken,
index: number = -1,
): Promise<any[]> {
const descendants = cleanBlocksForDescendant(blocks);
if (descendants.length === 0) {
return [];
}
const res = await client.docx.documentBlockDescendant.create({
path: { document_id: docToken, block_id: parentBlockId },
data: {
children_id: firstLevelBlockIds,
descendants,
index,
},
});
if (res.code !== 0) {
throw new Error(`${res.msg} (code: ${res.code})`);
}
return res.data?.children ?? [];
}
/**
* Insert blocks in batches for large documents (>1000 blocks).
*
* Batches are split to ensure BOTH children_id AND descendants
* arrays stay under the 1000 block API limit.
*
* @param client - Feishu API client
* @param docToken - Document ID
* @param blocks - All blocks from Convert API
* @param firstLevelBlockIds - IDs of top-level blocks to insert
* @param logger - Optional logger for progress updates
* @param parentBlockId - Parent block to insert into (defaults to docToken = document root)
* @param startIndex - Starting position within parent (-1 = end). For multi-batch inserts,
* each batch advances this by the number of first-level IDs inserted so far.
* @returns Inserted children blocks and any skipped block IDs
*/
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- SDK block types
export async function insertBlocksInBatches(
client: Lark.Client,
docToken: string,
blocks: any[],
firstLevelBlockIds: string[],
logger?: Logger,
parentBlockId: string = docToken,
startIndex: number = -1,
): Promise<{ children: any[]; skipped: string[] }> {
const allChildren: any[] = [];
// Build batches ensuring each batch has ≤1000 total descendants
const batches: { firstLevelIds: string[]; blocks: any[] }[] = [];
let currentBatch: { firstLevelIds: string[]; blocks: any[] } = { firstLevelIds: [], blocks: [] };
const usedBlockIds = new Set<string>();
for (const firstLevelId of firstLevelBlockIds) {
const descendants = collectDescendants(blocks, [firstLevelId]);
const newBlocks = descendants.filter((b) => !usedBlockIds.has(b.block_id));
// A single block whose subtree exceeds the API limit cannot be split
// (a table or other compound block must be inserted atomically).
if (newBlocks.length > BATCH_SIZE) {
throw new Error(
`Block "${firstLevelId}" has ${newBlocks.length} descendants, which exceeds the ` +
`Feishu API limit of ${BATCH_SIZE} blocks per request. ` +
`Please split the content into smaller sections.`,
);
}
// If adding this first-level block would exceed limit, start new batch
if (
currentBatch.blocks.length + newBlocks.length > BATCH_SIZE &&
currentBatch.blocks.length > 0
) {
batches.push(currentBatch);
currentBatch = { firstLevelIds: [], blocks: [] };
}
// Add to current batch
currentBatch.firstLevelIds.push(firstLevelId);
for (const block of newBlocks) {
currentBatch.blocks.push(block);
usedBlockIds.add(block.block_id);
}
}
// Don't forget the last batch
if (currentBatch.blocks.length > 0) {
batches.push(currentBatch);
}
// Insert each batch, advancing index for position-aware inserts.
// When startIndex == -1 (append to end), each batch appends after the previous.
// When startIndex >= 0, each batch starts at startIndex + count of first-level IDs already inserted.
let currentIndex = startIndex;
for (let i = 0; i < batches.length; i++) {
const batch = batches[i];
logger?.info?.(
`feishu_doc: Inserting batch ${i + 1}/${batches.length} (${batch.blocks.length} blocks)...`,
);
const children = await insertBatch(
client,
docToken,
batch.blocks,
batch.firstLevelIds,
parentBlockId,
currentIndex,
);
allChildren.push(...children);
// Advance index only for explicit positions; -1 always means "after last inserted"
if (currentIndex !== -1) {
currentIndex += batch.firstLevelIds.length;
}
}
return { children: allChildren, skipped: [] };
}


@@ -0,0 +1,149 @@
/**
* Colored text support for Feishu documents.
*
* Parses a simple color markup syntax and updates a text block
* with native Feishu text_run color styles.
*
* Syntax: [color]text[/color]
* Supported colors: red, orange, yellow, green, blue, purple, grey
*
* Example:
* "Revenue [green]+15%[/green] YoY, Costs [red]-3%[/red]"
*/
import type * as Lark from "@larksuiteoapi/node-sdk";
// Feishu text_color values (1-7)
const TEXT_COLOR: Record<string, number> = {
red: 1, // Pink (closest to red in Feishu)
orange: 2,
yellow: 3,
green: 4,
blue: 5,
purple: 6,
grey: 7,
gray: 7,
};
// Feishu background_color values (1-15)
const BACKGROUND_COLOR: Record<string, number> = {
red: 1,
orange: 2,
yellow: 3,
green: 4,
blue: 5,
purple: 6,
grey: 7,
gray: 7,
};
interface Segment {
text: string;
textColor?: number;
bgColor?: number;
bold?: boolean;
}
/**
* Parse color markup into segments.
*
* Supports:
* [red]text[/red] → red text
* [bg:yellow]text[/bg] → yellow background
* [bold]text[/bold] → bold
* [green bold]text[/green] → green + bold
*/
export function parseColorMarkup(content: string): Segment[] {
const segments: Segment[] = [];
// Only [known_tag]...[/...] pairs are treated as markup. Using an open
// pattern like \[([^\]]+)\] would match any bracket token — e.g. [Q1] —
// and cause it to consume a later real closing tag ([/red]), silently
// corrupting the surrounding styled spans. Restricting the opening tag to
// the set of recognised colour/style names prevents that: [Q1] does not
// match the tag alternative and each of its characters falls through to the
// plain-text alternatives instead.
//
// Closing tag name is still not validated against the opening tag:
// [red]text[/green] is treated as [red]text[/red] — opening style applies
// and the closing tag is consumed regardless of its name.
const KNOWN = "(?:bg:[a-z]+|bold|red|orange|yellow|green|blue|purple|gr[ae]y)";
const tagPattern = new RegExp(
`\\[(${KNOWN}(?:\\s+${KNOWN})*)\\](.*?)\\[\\/(?:[^\\]]+)\\]|([^[]+|\\[)`,
"gis",
);
let match;
while ((match = tagPattern.exec(content)) !== null) {
if (match[3] !== undefined) {
// Plain text segment
if (match[3]) {
segments.push({ text: match[3] });
}
} else {
// Tagged segment
const tagStr = match[1].toLowerCase().trim();
const text = match[2];
const tags = tagStr.split(/\s+/);
const segment: Segment = { text };
for (const tag of tags) {
if (tag.startsWith("bg:")) {
const color = tag.slice(3);
if (BACKGROUND_COLOR[color]) {
segment.bgColor = BACKGROUND_COLOR[color];
}
} else if (tag === "bold") {
segment.bold = true;
} else if (TEXT_COLOR[tag]) {
segment.textColor = TEXT_COLOR[tag];
}
}
if (text) {
segments.push(segment);
}
}
}
return segments;
}
/**
* Update a text block with colored segments.
*/
export async function updateColorText(
client: Lark.Client,
docToken: string,
blockId: string,
content: string,
) {
const segments = parseColorMarkup(content);
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- SDK type
const elements: any[] = segments.map((seg) => ({
text_run: {
content: seg.text,
text_element_style: {
...(seg.textColor && { text_color: seg.textColor }),
...(seg.bgColor && { background_color: seg.bgColor }),
...(seg.bold && { bold: true }),
},
},
}));
const res = await client.docx.documentBlock.patch({
path: { document_id: docToken, block_id: blockId },
data: { update_text_elements: { elements } },
});
if (res.code !== 0) {
throw new Error(res.msg);
}
return {
success: true,
segments: segments.length,
block: res.data?.block,
};
}


@@ -0,0 +1,298 @@
/**
* Table utilities and row/column manipulation operations for Feishu documents.
*
* Combines:
* - Adaptive column width calculation (content-proportional, CJK-aware)
* - Block cleaning for Descendant API (removes read-only fields)
* - Table row/column insert, delete, and merge operations
*/
import type * as Lark from "@larksuiteoapi/node-sdk";
// ============ Table Utilities ============
// Feishu table constraints
const MIN_COLUMN_WIDTH = 50; // Feishu API minimum
const MAX_COLUMN_WIDTH = 400; // Reasonable maximum for readability
const DEFAULT_TABLE_WIDTH = 730; // Approximate Feishu page content width
/**
* Calculate adaptive column widths based on cell content length.
*
* Algorithm:
* 1. For each column, find the max content length across all rows
* 2. Weight CJK characters as 2x width (they render wider)
* 3. Calculate proportional widths based on content length
* 4. Apply min/max constraints
* 5. Redistribute remaining space to fill total table width
*
* Total width is derived from the original column_width values returned
* by the Convert API, ensuring tables match Feishu's expected dimensions.
*
* @param blocks - Array of blocks from Convert API
* @param tableBlockId - The block_id of the table block
* @returns Array of column widths in pixels
*/
export function calculateAdaptiveColumnWidths(
// eslint-disable-next-line @typescript-eslint/no-explicit-any
blocks: any[],
tableBlockId: string,
): number[] {
// Find the table block
const tableBlock = blocks.find((b) => b.block_id === tableBlockId && b.block_type === 31);
if (!tableBlock?.table?.property) {
return [];
}
const { row_size, column_size, column_width: originalWidths } = tableBlock.table.property;
// Use original total width from Convert API, or fall back to default
const totalWidth =
originalWidths && originalWidths.length > 0
? originalWidths.reduce((a: number, b: number) => a + b, 0)
: DEFAULT_TABLE_WIDTH;
const cellIds: string[] = tableBlock.children || [];
// Build block lookup map
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const blockMap = new Map<string, any>();
for (const block of blocks) {
blockMap.set(block.block_id, block);
}
// Extract text content from a table cell
function getCellText(cellId: string): string {
const cell = blockMap.get(cellId);
if (!cell?.children) return "";
let text = "";
const childIds = Array.isArray(cell.children) ? cell.children : [cell.children];
for (const childId of childIds) {
const child = blockMap.get(childId);
if (child?.text?.elements) {
for (const elem of child.text.elements) {
if (elem.text_run?.content) {
text += elem.text_run.content;
}
}
}
}
return text;
}
// Calculate weighted length (CJK chars count as 2)
// CJK (Chinese/Japanese/Korean) characters render ~2x wider than ASCII
function getWeightedLength(text: string): number {
return [...text].reduce((sum, char) => {
return sum + (char.charCodeAt(0) > 255 ? 2 : 1);
}, 0);
}
// Find max content length per column
const maxLengths: number[] = new Array(column_size).fill(0);
for (let row = 0; row < row_size; row++) {
for (let col = 0; col < column_size; col++) {
const cellIndex = row * column_size + col;
const cellId = cellIds[cellIndex];
if (cellId) {
const content = getCellText(cellId);
const length = getWeightedLength(content);
maxLengths[col] = Math.max(maxLengths[col], length);
}
}
}
// Handle empty table: distribute width equally, clamped to [MIN, MAX] so
// wide tables (e.g. 15+ columns) don't produce sub-50 widths that Feishu
// rejects as invalid column_width values.
const totalLength = maxLengths.reduce((a, b) => a + b, 0);
if (totalLength === 0) {
const equalWidth = Math.max(
MIN_COLUMN_WIDTH,
Math.min(MAX_COLUMN_WIDTH, Math.floor(totalWidth / column_size)),
);
return new Array(column_size).fill(equalWidth);
}
// Calculate proportional widths
let widths = maxLengths.map((len) => {
const proportion = len / totalLength;
return Math.round(proportion * totalWidth);
});
// Apply min/max constraints
widths = widths.map((w) => Math.max(MIN_COLUMN_WIDTH, Math.min(MAX_COLUMN_WIDTH, w)));
// Redistribute remaining space to fill total width
let remaining = totalWidth - widths.reduce((a, b) => a + b, 0);
while (remaining > 0) {
// Find columns that can still grow (not at max)
const growable = widths.map((w, i) => (w < MAX_COLUMN_WIDTH ? i : -1)).filter((i) => i >= 0);
if (growable.length === 0) break;
// Distribute evenly among growable columns
const perColumn = Math.floor(remaining / growable.length);
if (perColumn === 0) break;
for (const i of growable) {
const add = Math.min(perColumn, MAX_COLUMN_WIDTH - widths[i]);
widths[i] += add;
remaining -= add;
}
}
return widths;
}
/**
* Clean blocks for Descendant API with adaptive column widths.
*
* - Removes parent_id from all blocks
* - Fixes children type (string → array) for TableCell blocks
* - Removes merge_info (read-only, causes API error)
* - Calculates and applies adaptive column_width for tables
*
* @param blocks - Array of blocks from Convert API
* @returns Cleaned blocks ready for Descendant API
*/
// eslint-disable-next-line @typescript-eslint/no-explicit-any
export function cleanBlocksForDescendant(blocks: any[]): any[] {
// Pre-calculate adaptive widths for all tables
const tableWidths = new Map<string, number[]>();
for (const block of blocks) {
if (block.block_type === 31) {
const widths = calculateAdaptiveColumnWidths(blocks, block.block_id);
tableWidths.set(block.block_id, widths);
}
}
return blocks.map((block) => {
// eslint-disable-next-line @typescript-eslint/no-unused-vars
const { parent_id: _parentId, ...cleanBlock } = block;
// Fix: Convert API sometimes returns children as string for TableCell
if (cleanBlock.block_type === 32 && typeof cleanBlock.children === "string") {
cleanBlock.children = [cleanBlock.children];
}
// Clean table blocks
if (cleanBlock.block_type === 31 && cleanBlock.table) {
// eslint-disable-next-line @typescript-eslint/no-unused-vars
const { cells: _cells, ...tableWithoutCells } = cleanBlock.table;
const { row_size, column_size } = tableWithoutCells.property || {};
const adaptiveWidths = tableWidths.get(block.block_id);
cleanBlock.table = {
property: {
row_size,
column_size,
...(adaptiveWidths?.length && { column_width: adaptiveWidths }),
},
};
}
return cleanBlock;
});
}
// ============ Table Row/Column Operations ============
export async function insertTableRow(
client: Lark.Client,
docToken: string,
blockId: string,
rowIndex: number = -1,
) {
const res = await client.docx.documentBlock.patch({
path: { document_id: docToken, block_id: blockId },
data: { insert_table_row: { row_index: rowIndex } },
});
if (res.code !== 0) {
throw new Error(res.msg);
}
return { success: true, block: res.data?.block };
}
export async function insertTableColumn(
client: Lark.Client,
docToken: string,
blockId: string,
columnIndex: number = -1,
) {
const res = await client.docx.documentBlock.patch({
path: { document_id: docToken, block_id: blockId },
data: { insert_table_column: { column_index: columnIndex } },
});
if (res.code !== 0) {
throw new Error(res.msg);
}
return { success: true, block: res.data?.block };
}
export async function deleteTableRows(
client: Lark.Client,
docToken: string,
blockId: string,
rowStart: number,
rowCount: number = 1,
) {
const res = await client.docx.documentBlock.patch({
path: { document_id: docToken, block_id: blockId },
data: { delete_table_rows: { row_start_index: rowStart, row_end_index: rowStart + rowCount } },
});
if (res.code !== 0) {
throw new Error(res.msg);
}
return { success: true, rows_deleted: rowCount, block: res.data?.block };
}
export async function deleteTableColumns(
client: Lark.Client,
docToken: string,
blockId: string,
columnStart: number,
columnCount: number = 1,
) {
const res = await client.docx.documentBlock.patch({
path: { document_id: docToken, block_id: blockId },
data: {
delete_table_columns: {
column_start_index: columnStart,
column_end_index: columnStart + columnCount,
},
},
});
if (res.code !== 0) {
throw new Error(res.msg);
}
return { success: true, columns_deleted: columnCount, block: res.data?.block };
}
export async function mergeTableCells(
client: Lark.Client,
docToken: string,
blockId: string,
rowStart: number,
rowEnd: number,
columnStart: number,
columnEnd: number,
) {
const res = await client.docx.documentBlock.patch({
path: { document_id: docToken, block_id: blockId },
data: {
merge_table_cells: {
row_start_index: rowStart,
row_end_index: rowEnd,
column_start_index: columnStart,
column_end_index: columnEnd,
},
},
});
if (res.code !== 0) {
throw new Error(res.msg);
}
return { success: true, block: res.data?.block };
}


@@ -29,6 +29,7 @@ describe("feishu_doc image fetch hardening", () => {
const blockChildrenCreateMock = vi.hoisted(() => vi.fn());
const blockChildrenGetMock = vi.hoisted(() => vi.fn());
const blockChildrenBatchDeleteMock = vi.hoisted(() => vi.fn());
const blockDescendantCreateMock = vi.hoisted(() => vi.fn());
const driveUploadAllMock = vi.hoisted(() => vi.fn());
const permissionMemberCreateMock = vi.hoisted(() => vi.fn());
const blockPatchMock = vi.hoisted(() => vi.fn());
@@ -52,6 +53,9 @@ describe("feishu_doc image fetch hardening", () => {
get: blockChildrenGetMock,
batchDelete: blockChildrenBatchDeleteMock,
},
documentBlockDescendant: {
create: blockDescendantCreateMock,
},
},
drive: {
media: {
@@ -95,6 +99,11 @@ describe("feishu_doc image fetch hardening", () => {
data: { items: [{ block_id: "placeholder_block_1" }] },
});
blockChildrenBatchDeleteMock.mockResolvedValue({ code: 0 });
// write/append use Descendant API; return image block so processImages runs
blockDescendantCreateMock.mockResolvedValue({
code: 0,
data: { children: [{ block_type: 27, block_id: "img_block_1" }] },
});
driveUploadAllMock.mockResolvedValue({ file_token: "token_1" });
documentCreateMock.mockResolvedValue({
code: 0,
@@ -121,11 +130,10 @@ describe("feishu_doc image fetch hardening", () => {
blockListMock.mockResolvedValue({ code: 0, data: { items: [] } });
-    // Each call returns the single block that was passed in
-    blockChildrenCreateMock
-      .mockResolvedValueOnce({ code: 0, data: { children: [{ block_type: 3, block_id: "h1" }] } })
-      .mockResolvedValueOnce({ code: 0, data: { children: [{ block_type: 2, block_id: "t1" }] } })
-      .mockResolvedValueOnce({ code: 0, data: { children: [{ block_type: 3, block_id: "h2" }] } });
+    blockDescendantCreateMock.mockResolvedValueOnce({
+      code: 0,
+      data: { children: [{ block_type: 3, block_id: "h1" }] },
+    });
const registerTool = vi.fn();
registerFeishuDocTools({
@@ -150,15 +158,11 @@ describe("feishu_doc image fetch hardening", () => {
content: "plain text body",
});
-    // Verify sequential insertion: one call per block
-    expect(blockChildrenCreateMock).toHaveBeenCalledTimes(3);
-    // Verify each call received exactly one block in the correct order
-    const calls = blockChildrenCreateMock.mock.calls;
-    expect(calls[0][0].data.children).toHaveLength(1);
-    expect(calls[0][0].data.children[0].block_id).toBe("h1");
-    expect(calls[1][0].data.children[0].block_id).toBe("t1");
-    expect(calls[2][0].data.children[0].block_id).toBe("h2");
+    expect(blockDescendantCreateMock).toHaveBeenCalledTimes(1);
+    const call = blockDescendantCreateMock.mock.calls[0]?.[0];
+    expect(call?.data.children_id).toEqual(["h1", "t1", "h2"]);
+    expect(call?.data.descendants).toBeDefined();
+    expect(call?.data.descendants.length).toBeGreaterThanOrEqual(3);
expect(result.details.blocks_added).toBe(3);
});
@@ -181,9 +185,13 @@ describe("feishu_doc image fetch hardening", () => {
};
});
blockDescendantCreateMock.mockImplementation(async ({ data }) => ({
code: 0,
data: {
children: (data.children_id as string[]).map((id) => ({
block_id: id,
})),
},
}));
const registerTool = vi.fn();


@@ -1,11 +1,22 @@
import { existsSync, promises as fs } from "node:fs";
import { homedir } from "node:os";
import { basename, isAbsolute } from "node:path";
import type * as Lark from "@larksuiteoapi/node-sdk";
import { Type } from "@sinclair/typebox";
import type { OpenClawPluginApi } from "openclaw/plugin-sdk";
import { listEnabledFeishuAccounts } from "./accounts.js";
import { FeishuDocSchema, type FeishuDocParams } from "./doc-schema.js";
import { BATCH_SIZE, insertBlocksInBatches } from "./docx-batch-insert.js";
import { updateColorText } from "./docx-color-text.js";
import {
cleanBlocksForDescendant,
insertTableRow,
insertTableColumn,
deleteTableRows,
deleteTableColumns,
mergeTableCells,
} from "./docx-table-ops.js";
import { getFeishuRuntime } from "./runtime.js";
import {
createFeishuToolClient,
@@ -248,12 +259,14 @@ async function chunkedConvertMarkdown(client: Lark.Client, markdown: string) {
const chunks = splitMarkdownByHeadings(markdown);
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- SDK block types
const allBlocks: any[] = [];
const allFirstLevelBlockIds: string[] = [];
for (const chunk of chunks) {
const { blocks, firstLevelBlockIds } = await convertMarkdownWithFallback(client, chunk);
const sorted = sortBlocksByFirstLevel(blocks, firstLevelBlockIds);
allBlocks.push(...sorted);
allFirstLevelBlockIds.push(...firstLevelBlockIds);
}
return { blocks: allBlocks, firstLevelBlockIds: allFirstLevelBlockIds };
}
/** Insert blocks in batches of MAX_BLOCKS_PER_INSERT to avoid API 400 errors */
@@ -279,6 +292,41 @@ async function chunkedInsertBlocks(
return { children: allChildren, skipped: allSkipped };
}
type Logger = { info?: (msg: string) => void };
/**
* Insert blocks using the Descendant API (supports tables, nested lists, large docs).
* Unlike the Children API, this supports block_type 31/32 (Table/TableCell).
*
* @param parentBlockId - Parent block to insert into (defaults to docToken = document root)
* @param index - Position within parent's children (-1 = end, 0 = first)
*/
/* eslint-disable @typescript-eslint/no-explicit-any -- SDK block types */
async function insertBlocksWithDescendant(
client: Lark.Client,
docToken: string,
blocks: any[],
firstLevelBlockIds: string[],
{ parentBlockId = docToken, index = -1 }: { parentBlockId?: string; index?: number } = {},
): Promise<{ children: any[] }> {
/* eslint-enable @typescript-eslint/no-explicit-any */
const descendants = cleanBlocksForDescendant(blocks);
if (descendants.length === 0) {
return { children: [] };
}
const res = await client.docx.documentBlockDescendant.create({
path: { document_id: docToken, block_id: parentBlockId },
data: { children_id: firstLevelBlockIds, descendants, index },
});
if (res.code !== 0) {
throw new Error(`${res.msg} (code: ${res.code})`);
}
return { children: res.data?.children ?? [] };
}
async function clearDocumentContent(client: Lark.Client, docToken: string) {
const existing = await client.docx.documentBlock.list({
path: { document_id: docToken },
@@ -310,6 +358,7 @@ async function uploadImageToDocx(
blockId: string,
imageBuffer: Buffer,
fileName: string,
docToken?: string,
): Promise<string> {
const res = await client.drive.media.uploadAll({
data: {
@@ -317,8 +366,15 @@ async function uploadImageToDocx(
parent_type: "docx_image",
parent_node: blockId,
size: imageBuffer.length,
// Pass Buffer directly so form-data can calculate Content-Length correctly.
// Readable.from() produces a stream with an unknown length, causing a
// Content-Length mismatch that silently truncates uploads for images larger than ~1KB.
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- SDK file type
file: imageBuffer as any,
// Required when the document block belongs to a non-default datacenter:
// tells the drive service which document the block belongs to for routing.
// Per API docs: certain upload scenarios require the cloud document token.
...(docToken ? { extra: JSON.stringify({ drive_route_token: docToken }) } : {}),
},
});
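The fix hinges on the multipart encoder being able to size the upload part. A minimal standalone sketch of the difference (illustrative only; the two `*Length` variables stand in for what an encoder can determine, and are not plugin code):

```typescript
import { Readable } from "node:stream";

// A Buffer carries its byte length, so a multipart encoder can emit an exact
// Content-Length header. A stream produced by Readable.from() exposes no
// length property, forcing the encoder to guess or mis-size the part.
const payload = Buffer.from("x".repeat(2048)); // stand-in for image bytes

const bufferLength: number | undefined = payload.length;
const streamLength: number | undefined = (
  Readable.from(payload) as unknown as { length?: number }
).length;

console.log(bufferLength); // 2048
console.log(streamLength); // undefined
```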
@@ -339,9 +395,112 @@ async function resolveUploadInput(
filePath: string | undefined,
maxBytes: number,
explicitFileName?: string,
imageInput?: string, // data URI, plain base64, or local path
): Promise<{ buffer: Buffer; fileName: string }> {
// Enforce mutual exclusivity: exactly one input source must be provided.
const inputSources = (
[url ? "url" : null, filePath ? "file_path" : null, imageInput ? "image" : null] as (
| string
| null
)[]
).filter(Boolean);
if (inputSources.length > 1) {
throw new Error(`Provide only one image source; got: ${inputSources.join(", ")}`);
}
// data URI: data:image/png;base64,xxxx
if (imageInput?.startsWith("data:")) {
const commaIdx = imageInput.indexOf(",");
if (commaIdx === -1) {
throw new Error("Invalid data URI: missing comma separator.");
}
const header = imageInput.slice(0, commaIdx);
const data = imageInput.slice(commaIdx + 1);
// Only base64-encoded data URIs are supported; reject plain/URL-encoded ones.
if (!header.includes(";base64")) {
throw new Error(
`Invalid data URI: missing ';base64' marker. ` +
`Expected format: data:image/png;base64,<base64data>`,
);
}
// Validate the payload is actually base64 before decoding; Node's decoder
// is permissive and would silently accept garbage bytes otherwise.
const trimmedData = data.trim();
if (trimmedData.length === 0 || !/^[A-Za-z0-9+/]+=*$/.test(trimmedData)) {
throw new Error(
`Invalid data URI: base64 payload contains characters outside the standard alphabet.`,
);
}
const mimeMatch = header.match(/data:([^;]+)/);
const ext = mimeMatch?.[1]?.split("/")[1] ?? "png";
// Estimate decoded byte count from base64 length BEFORE allocating the
// full buffer to avoid spiking memory on oversized payloads.
const estimatedBytes = Math.ceil((trimmedData.length * 3) / 4);
if (estimatedBytes > maxBytes) {
throw new Error(
`Image data URI exceeds limit: estimated ${estimatedBytes} bytes > ${maxBytes} bytes`,
);
}
const buffer = Buffer.from(trimmedData, "base64");
return { buffer, fileName: explicitFileName ?? `image.${ext}` };
}
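The `ceil(len * 3 / 4)` estimate above slightly overestimates when `=` padding is present, which is safe for a limit check. A padding-aware variant can be sketched as follows (`estimateBase64Bytes` is a hypothetical helper for illustration, not part of the plugin):

```typescript
// Estimate the decoded size of a base64 payload without allocating a buffer.
// Every 4 base64 characters encode 3 bytes; each trailing '=' drops one byte.
function estimateBase64Bytes(b64: string): number {
  const trimmed = b64.trim();
  const padding = trimmed.endsWith("==") ? 2 : trimmed.endsWith("=") ? 1 : 0;
  return Math.ceil((trimmed.length * 3) / 4) - padding;
}

// "aGVsbG8=" decodes to "hello" (5 bytes); the estimate matches exactly.
console.log(estimateBase64Bytes("aGVsbG8=")); // 5
console.log(Buffer.from("aGVsbG8=", "base64").length); // 5
```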
// local path: ~, ./ and ../ are unambiguous (not in base64 alphabet).
// Absolute paths (/...) are supported but must exist on disk. If an absolute
// path does not exist we throw immediately rather than falling through to
// base64 decoding, which would silently upload garbage bytes.
// Note: JPEG base64 starts with "/9j/" — pass as data:image/jpeg;base64,...
// to avoid ambiguity with absolute paths.
if (imageInput) {
const candidate = imageInput.startsWith("~") ? imageInput.replace(/^~/, homedir()) : imageInput;
const unambiguousPath =
imageInput.startsWith("~") || imageInput.startsWith("./") || imageInput.startsWith("../");
const absolutePath = isAbsolute(imageInput);
if (unambiguousPath || (absolutePath && existsSync(candidate))) {
const buffer = await fs.readFile(candidate);
if (buffer.length > maxBytes) {
throw new Error(`Local file exceeds limit: ${buffer.length} bytes > ${maxBytes} bytes`);
}
return { buffer, fileName: explicitFileName ?? basename(candidate) };
}
if (absolutePath && !existsSync(candidate)) {
throw new Error(
`File not found: "${candidate}". ` +
`If you intended to pass image binary data, use a data URI instead: data:image/jpeg;base64,...`,
);
}
}
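The disambiguation order described in the comments above can be summarized as a small classifier (a sketch with a hypothetical helper name; the real code additionally consults `existsSync` for absolute paths):

```typescript
// Classify an image input string: "data:" wins, then path prefixes that
// contain characters outside the base64 alphabet, otherwise assume base64.
type ImageInputKind = "data-uri" | "path" | "base64";

function classifyImageInput(input: string): ImageInputKind {
  if (input.startsWith("data:")) return "data-uri";
  // "~", "./", "../" can never appear in valid base64, so they are unambiguous.
  if (input.startsWith("~") || input.startsWith("./") || input.startsWith("../")) {
    return "path";
  }
  // Ambiguous case: "/9j/..." is both a plausible absolute path and the start
  // of base64-encoded JPEG data; the plugin breaks the tie with existsSync.
  return "base64";
}

console.log(classifyImageInput("data:image/png;base64,AAAA")); // "data-uri"
console.log(classifyImageInput("~/Pictures/cat.png")); // "path"
console.log(classifyImageInput("/9j/4AAQSkZJRg==")); // "base64"
```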
// plain base64 string (standard base64 alphabet includes '+', '/', '=')
if (imageInput) {
const trimmed = imageInput.trim();
// Node's Buffer.from is permissive and silently ignores out-of-alphabet chars,
// which would decode malformed strings into arbitrary bytes. Reject early.
if (trimmed.length === 0 || !/^[A-Za-z0-9+/]+=*$/.test(trimmed)) {
throw new Error(
`Invalid base64: image input contains characters outside the standard base64 alphabet. ` +
`Use a data URI (data:image/png;base64,...) or a local file path instead.`,
);
}
// Estimate decoded byte count from base64 length BEFORE allocating the
// full buffer to avoid spiking memory on oversized payloads.
const estimatedBytes = Math.ceil((trimmed.length * 3) / 4);
if (estimatedBytes > maxBytes) {
throw new Error(
`Base64 image exceeds limit: estimated ${estimatedBytes} bytes > ${maxBytes} bytes`,
);
}
const buffer = Buffer.from(trimmed, "base64");
if (buffer.length === 0) {
throw new Error("Base64 image decoded to empty buffer; check the input.");
}
return { buffer, fileName: explicitFileName ?? "image.png" };
}
if (!url && !filePath) {
throw new Error("Either url, file_path, or image (base64/data URI) must be provided");
}
if (url && filePath) {
throw new Error("Provide only one of url or file_path");
@@ -392,7 +551,7 @@ async function processImages(
const buffer = await downloadImage(url, maxBytes);
const urlPath = new URL(url).pathname;
const fileName = urlPath.split("/").pop() || `image_${i}.png`;
const fileToken = await uploadImageToDocx(client, blockId, buffer, fileName);
const fileToken = await uploadImageToDocx(client, blockId, buffer, fileName, docToken);
await client.docx.documentBlock.patch({
path: { document_id: docToken, block_id: blockId },
@@ -419,32 +578,39 @@ async function uploadImageBlock(
parentBlockId?: string,
filename?: string,
index?: number,
imageInput?: string, // data URI, plain base64, or local path
) {
// Step 1: Create an empty image block (block_type 27).
// Per Feishu FAQ: image token cannot be set at block creation time.
const insertRes = await client.docx.documentBlockChildren.create({
path: { document_id: docToken, block_id: parentBlockId ?? docToken },
params: { document_revision_id: -1 },
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- SDK type
data: { children: [{ block_type: 27, image: {} as any }], index: index ?? -1 },
});
if (insertRes.code !== 0) {
throw new Error(`Failed to create image block: ${insertRes.msg}`);
}
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- SDK return shape
const imageBlockId = insertRes.data?.children?.find((b: any) => b.block_type === 27)?.block_id;
if (!imageBlockId) {
throw new Error("Failed to create image block");
}
// Step 2: Resolve and upload the image buffer.
const upload = await resolveUploadInput(url, filePath, maxBytes, filename, imageInput);
const fileToken = await uploadImageToDocx(
client,
imageBlockId,
upload.buffer,
upload.fileName,
docToken, // drive_route_token for multi-datacenter routing
);
// Step 3: Set the image token on the block.
const patchRes = await client.docx.documentBlock.patch({
path: { document_id: docToken, block_id: imageBlockId },
data: { replace_image: { token: fileToken } },
});
if (patchRes.code !== 0) {
throw new Error(patchRes.msg);
@@ -518,8 +684,8 @@ async function uploadFileBlock(
parent_type: "docx_file",
parent_node: docToken,
size: upload.buffer.length,
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- SDK file type
file: upload.buffer as any,
},
});
@@ -632,25 +798,34 @@ async function createDoc(
};
}
async function writeDoc(
client: Lark.Client,
docToken: string,
markdown: string,
maxBytes: number,
logger?: Logger,
) {
const deleted = await clearDocumentContent(client, docToken);
logger?.info?.("feishu_doc: Converting markdown...");
const { blocks, firstLevelBlockIds } = await chunkedConvertMarkdown(client, markdown);
if (blocks.length === 0) {
return { success: true, blocks_deleted: deleted, blocks_added: 0, images_processed: 0 };
}
logger?.info?.(`feishu_doc: Converted to ${blocks.length} blocks, inserting...`);
const sortedBlocks = sortBlocksByFirstLevel(blocks, firstLevelBlockIds);
const { children: inserted } =
blocks.length > BATCH_SIZE
? await insertBlocksInBatches(client, docToken, sortedBlocks, firstLevelBlockIds, logger)
: await insertBlocksWithDescendant(client, docToken, sortedBlocks, firstLevelBlockIds);
const imagesProcessed = await processImages(client, docToken, markdown, inserted, maxBytes);
logger?.info?.(`feishu_doc: Done (${blocks.length} blocks, ${imagesProcessed} images)`);
return {
success: true,
blocks_deleted: deleted,
blocks_added: blocks.length,
images_processed: imagesProcessed,
};
}
@@ -659,24 +834,105 @@ async function appendDoc(
docToken: string,
markdown: string,
maxBytes: number,
logger?: Logger,
) {
logger?.info?.("feishu_doc: Converting markdown...");
const { blocks, firstLevelBlockIds } = await chunkedConvertMarkdown(client, markdown);
if (blocks.length === 0) {
throw new Error("Content is empty");
}
logger?.info?.(`feishu_doc: Converted to ${blocks.length} blocks, inserting...`);
const sortedBlocks = sortBlocksByFirstLevel(blocks, firstLevelBlockIds);
const { children: inserted } =
blocks.length > BATCH_SIZE
? await insertBlocksInBatches(client, docToken, sortedBlocks, firstLevelBlockIds, logger)
: await insertBlocksWithDescendant(client, docToken, sortedBlocks, firstLevelBlockIds);
const imagesProcessed = await processImages(client, docToken, markdown, inserted, maxBytes);
logger?.info?.(`feishu_doc: Done (${blocks.length} blocks, ${imagesProcessed} images)`);
return {
success: true,
blocks_added: blocks.length,
images_processed: imagesProcessed,
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- SDK block type
block_ids: inserted.map((b: any) => b.block_id),
};
}
async function insertDoc(
client: Lark.Client,
docToken: string,
markdown: string,
afterBlockId: string,
maxBytes: number,
logger?: Logger,
) {
const blockInfo = await client.docx.documentBlock.get({
path: { document_id: docToken, block_id: afterBlockId },
});
if (blockInfo.code !== 0) throw new Error(blockInfo.msg);
const parentId = blockInfo.data?.block?.parent_id ?? docToken;
// Paginate through all children to reliably locate after_block_id.
// documentBlockChildren.get returns up to 200 children per page; large
// parents require multiple requests.
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- SDK block type
const items: any[] = [];
let pageToken: string | undefined;
do {
const childrenRes = await client.docx.documentBlockChildren.get({
path: { document_id: docToken, block_id: parentId },
params: pageToken ? { page_token: pageToken } : {},
});
if (childrenRes.code !== 0) throw new Error(childrenRes.msg);
items.push(...(childrenRes.data?.items ?? []));
pageToken = childrenRes.data?.page_token ?? undefined;
} while (pageToken);
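The do/while above is the standard `page_token` cursor pattern. It generalizes to a small helper, sketched here with a stubbed fetcher standing in for `documentBlockChildren.get` (names are illustrative, not plugin code):

```typescript
// Generic page_token pagination, mirroring the do/while loop above: fetch,
// accumulate items, and continue while the response carries a next-page token.
type Page<T> = { items: T[]; page_token?: string };

async function fetchAllPages<T>(
  fetch: (pageToken?: string) => Promise<Page<T>>,
): Promise<T[]> {
  const items: T[] = [];
  let pageToken: string | undefined;
  do {
    const page = await fetch(pageToken);
    items.push(...page.items);
    pageToken = page.page_token; // undefined (or empty) ends the loop
  } while (pageToken);
  return items;
}

// Stubbed two-page response for demonstration.
const pages: Record<string, Page<string>> = {
  start: { items: ["b1", "b2"], page_token: "p2" },
  p2: { items: ["b3"] },
};
const all = await fetchAllPages((t) => Promise.resolve(pages[t ?? "start"]));
console.log(all); // ["b1", "b2", "b3"]
```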
const blockIndex = items.findIndex((item) => item.block_id === afterBlockId);
if (blockIndex === -1) {
throw new Error(
`after_block_id "${afterBlockId}" was not found among the children of parent block "${parentId}". ` +
`Use list_blocks to verify the block ID.`,
);
}
const insertIndex = blockIndex + 1;
logger?.info?.("feishu_doc: Converting markdown...");
const { blocks, firstLevelBlockIds } = await chunkedConvertMarkdown(client, markdown);
if (blocks.length === 0) throw new Error("Content is empty");
const sortedBlocks = sortBlocksByFirstLevel(blocks, firstLevelBlockIds);
logger?.info?.(
`feishu_doc: Converted to ${blocks.length} blocks, inserting at index ${insertIndex}...`,
);
const { children: inserted } =
blocks.length > BATCH_SIZE
? await insertBlocksInBatches(
client,
docToken,
sortedBlocks,
firstLevelBlockIds,
logger,
parentId,
insertIndex,
)
: await insertBlocksWithDescendant(client, docToken, sortedBlocks, firstLevelBlockIds, {
parentBlockId: parentId,
index: insertIndex,
});
const imagesProcessed = await processImages(client, docToken, markdown, inserted, maxBytes);
logger?.info?.(`feishu_doc: Done (${blocks.length} blocks, ${imagesProcessed} images)`);
return {
success: true,
blocks_added: blocks.length,
images_processed: imagesProcessed,
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- SDK block type
block_ids: inserted.map((b: any) => b.block_id),
};
}
@@ -999,7 +1255,7 @@ export function registerFeishuDocTools(api: OpenClawPluginApi) {
name: "feishu_doc",
label: "Feishu Doc",
description:
"Feishu document operations. Actions: read, write, append, insert, create, list_blocks, get_block, update_block, delete_block, create_table, write_table_cells, create_table_with_values, insert_table_row, insert_table_column, delete_table_rows, delete_table_columns, merge_table_cells, upload_image, upload_file, color_text",
parameters: FeishuDocSchema,
async execute(_toolCallId, params) {
const p = params as FeishuDocExecuteParams;
@@ -1015,6 +1271,7 @@ export function registerFeishuDocTools(api: OpenClawPluginApi) {
p.doc_token,
p.content,
getMediaMaxBytes(p, defaultAccountId),
api.logger,
),
);
case "append":
@@ -1024,6 +1281,18 @@ export function registerFeishuDocTools(api: OpenClawPluginApi) {
p.doc_token,
p.content,
getMediaMaxBytes(p, defaultAccountId),
api.logger,
),
);
case "insert":
return json(
await insertDoc(
client,
p.doc_token,
p.content,
p.after_block_id,
getMediaMaxBytes(p, defaultAccountId),
api.logger,
),
);
case "create":
@@ -1082,6 +1351,7 @@ export function registerFeishuDocTools(api: OpenClawPluginApi) {
p.parent_block_id,
p.filename,
p.index,
p.image, // data URI or plain base64
),
);
case "upload_file":
@@ -1096,6 +1366,46 @@ export function registerFeishuDocTools(api: OpenClawPluginApi) {
p.filename,
),
);
case "color_text":
return json(await updateColorText(client, p.doc_token, p.block_id, p.content));
case "insert_table_row":
return json(await insertTableRow(client, p.doc_token, p.block_id, p.row_index));
case "insert_table_column":
return json(
await insertTableColumn(client, p.doc_token, p.block_id, p.column_index),
);
case "delete_table_rows":
return json(
await deleteTableRows(
client,
p.doc_token,
p.block_id,
p.row_start,
p.row_count,
),
);
case "delete_table_columns":
return json(
await deleteTableColumns(
client,
p.doc_token,
p.block_id,
p.column_start,
p.column_count,
),
);
case "merge_table_cells":
return json(
await mergeTableCells(
client,
p.doc_token,
p.block_id,
p.row_start,
p.row_end,
p.column_start,
p.column_end,
),
);
default:
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- exhaustive check fallback
return json({ error: `Unknown action: ${(p as any).action}` });