| phase | plan | type | wave | depends_on | files_modified | autonomous | requirements | must_haves |
|---|---|---|---|---|---|---|---|---|
| 02-backend-services | 02 | execute | 1 | | | true | | |
Purpose: HLTH-02 requires real authenticated API calls (not config checks), and HLTH-04 requires results to persist to Supabase. This plan builds the probe logic and persistence layer.
Output: healthProbeService.ts with 4 probers + runAllProbes orchestrator, and unit tests. Also installs nodemailer (needed by Plan 03).
<execution_context> @/home/jonathan/.claude/get-shit-done/workflows/execute-plan.md @/home/jonathan/.claude/get-shit-done/templates/summary.md </execution_context>
@.planning/PROJECT.md @.planning/ROADMAP.md @.planning/STATE.md @.planning/phases/02-backend-services/02-RESEARCH.md @.planning/phases/01-data-foundation/01-01-SUMMARY.md @backend/src/models/HealthCheckModel.ts @backend/src/config/supabase.ts @backend/src/services/documentAiProcessor.ts @backend/src/services/llmService.ts @backend/src/config/firebase.ts

Task 1: Install nodemailer and create healthProbeService

Files: `backend/package.json`, `backend/src/services/healthProbeService.ts`

**Step 1: Install nodemailer** (needed by Plan 03; installing now avoids package.json conflicts during parallel execution):

```bash
cd backend && npm install nodemailer && npm install --save-dev @types/nodemailer
```

**Step 2: Create healthProbeService.ts** with the following structure:
Export a ProbeResult interface:

```typescript
export interface ProbeResult {
  service_name: string;
  status: 'healthy' | 'degraded' | 'down';
  latency_ms: number;
  error_message?: string;
  probe_details?: Record<string, unknown>;
}
```
Create 4 individual probe functions (all private/unexported):

- `probeDocumentAI()`: Import `DocumentProcessorServiceClient` from `@google-cloud/documentai`. Call `client.listProcessors({ parent: ... })` using the project ID from config. Latency > 2000ms = 'degraded'. Caught errors = 'down' with error_message.
- `probeLLM()`: Import `Anthropic` from `@anthropic-ai/sdk`. Create the client with `process.env.ANTHROPIC_API_KEY`. Call `client.messages.create({ model: 'claude-haiku-4-5', max_tokens: 5, messages: [{ role: 'user', content: 'Hi' }] })`. Use the cheapest model (PITFALL B prevention). Latency > 5000ms = 'degraded'. 429 errors = 'degraded' (rate limit, not down). Other errors = 'down'.
- `probeSupabase()`: Import `getPostgresPool` from `'../config/supabase'`. Call `pool.query('SELECT 1')`. Use direct PostgreSQL, NOT PostgREST (PITFALL C prevention). Latency > 2000ms = 'degraded'. Errors = 'down'.
- `probeFirebaseAuth()`: Import `admin` from `firebase-admin` (or use the existing firebase config). Call `admin.auth().verifyIdToken('invalid-token-probe-check')`. This ALWAYS throws. If the error message contains 'argument' or 'INVALID' = 'healthy' (the SDK is alive). Other errors = 'down'.
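The four probes share the same timing-and-classification plumbing, which can be factored into one helper. The sketch below illustrates that pattern under the thresholds above; `runWithLatency` and `degradedAboveMs` are illustrative names, not part of the codebase:

```typescript
// Sketch: time an async call and map the outcome to a ProbeResult.
// ProbeResult matches the exported interface above; the helper name is illustrative.
interface ProbeResult {
  service_name: string;
  status: 'healthy' | 'degraded' | 'down';
  latency_ms: number;
  error_message?: string;
  probe_details?: Record<string, unknown>;
}

async function runWithLatency(
  service_name: string,
  call: () => Promise<unknown>,
  degradedAboveMs: number,
): Promise<ProbeResult> {
  const start = Date.now();
  try {
    await call();
    const latency_ms = Date.now() - start;
    // Slow but successful responses are 'degraded', not 'down'.
    return {
      service_name,
      status: latency_ms > degradedAboveMs ? 'degraded' : 'healthy',
      latency_ms,
    };
  } catch (err) {
    // Failed calls become a structured 'down' result instead of throwing.
    return {
      service_name,
      status: 'down',
      latency_ms: Date.now() - start,
      error_message: err instanceof Error ? err.message : String(err),
    };
  }
}
```

With this helper, `probeSupabase()` reduces to roughly `runWithLatency('supabase', () => pool.query('SELECT 1'), 2000)`; the LLM probe would additionally inspect the caught error and downgrade 429s to 'degraded' before returning.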
Create `runAllProbes()` as the orchestrator:

- Wrap each probe in an individual try/catch (PITFALL E: one probe failure must not stop the others)
- For each ProbeResult, call `HealthCheckModel.create({ service_name, status, latency_ms, error_message, probe_details, checked_at: new Date().toISOString() })`
- Return the array of all ProbeResults
- Log a summary via the Winston logger

Export as object: `export const healthProbeService = { runAllProbes }`.
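The PITFALL E requirement can be sketched as follows. This is a simplified, self-contained version: the probe list is passed in as an argument for illustration, and the `HealthCheckModel.create()` call is stubbed out as a comment so the sketch runs on its own:

```typescript
type Status = 'healthy' | 'degraded' | 'down';

interface ProbeResult {
  service_name: string;
  status: Status;
  latency_ms: number;
  error_message?: string;
}

// Sketch of the fault-tolerant orchestrator: each probe gets its own
// try/catch, so one failure cannot stop the others (PITFALL E). In the
// real service the probe list is fixed and each result is persisted.
async function runAllProbes(
  probes: Array<[string, () => Promise<ProbeResult>]>,
): Promise<ProbeResult[]> {
  const results: ProbeResult[] = [];
  for (const [service_name, probe] of probes) {
    try {
      results.push(await probe());
    } catch (err) {
      // A probe that throws still yields a 'down' result for persistence.
      results.push({
        service_name,
        status: 'down',
        latency_ms: 0,
        error_message: err instanceof Error ? err.message : String(err),
      });
    }
    // await HealthCheckModel.create({ ...results[results.length - 1],
    //   checked_at: new Date().toISOString() });
  }
  return results;
}
```

This is why the unit tests below can assert that four `HealthCheckModel.create` calls happen even when one probe throws: the catch branch converts the failure into a persistable result rather than propagating it.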
Use the Winston logger for all logging. Use the `getSupabaseServiceClient()` per-method pattern for any Supabase calls (though the Supabase probe itself uses `getPostgresPool()` directly).
cd /home/jonathan/Coding/cim_summary/backend && npx tsc --noEmit --pretty 2>&1 | head -30
Verify healthProbeService.ts exists with runAllProbes and ProbeResult exports
nodemailer installed. healthProbeService.ts exports ProbeResult interface and healthProbeService object with runAllProbes(). Four probes make real API calls. Each probe wrapped in try/catch. Results persisted via HealthCheckModel.create(). TypeScript compiles.
Mock all external dependencies:

- `vi.mock('../../models/HealthCheckModel')` — mock `create()` to resolve successfully
- `vi.mock('../../config/supabase')` — mock `getPostgresPool()` returning `{ query: vi.fn() }`
- `vi.mock('@google-cloud/documentai')` — mock `DocumentProcessorServiceClient` with `listProcessors` resolving
- `vi.mock('@anthropic-ai/sdk')` — mock the `Anthropic` constructor, `messages.create` resolving
- `vi.mock('firebase-admin')` — mock `auth().verifyIdToken()` throwing the expected error
- `vi.mock('../../utils/logger')` — mock the logger
Test cases for runAllProbes:

- All probes healthy — returns 4 ProbeResults with status 'healthy' — all mocks resolve quickly; verify 4 results returned with status 'healthy'
- Each result persisted via HealthCheckModel.create — verify `HealthCheckModel.create` called 4 times with the correct service_name values: 'document_ai', 'llm_api', 'supabase', 'firebase_auth'
- One probe throws — others still run — make the Document AI mock throw; verify the 3 other probes still complete and all 4 `HealthCheckModel.create` calls happen (the failed probe creates a 'down' result)
- LLM probe 429 error returns 'degraded', not 'down' — make the Anthropic mock throw an error with '429' in the message; verify the result status is 'degraded'
- Supabase probe uses getPostgresPool, not getSupabaseServiceClient — verify `getPostgresPool` was called (not `getSupabaseServiceClient`) during the Supabase probe
- Firebase Auth probe — expected error = healthy — mock `verifyIdToken` throwing 'Decoding Firebase ID token failed' (note: the mocked message must actually contain 'argument' or 'INVALID' for the probe's healthy check to match, so include an argument-style phrase in the mock error); verify status is 'healthy'
- Firebase Auth probe — unexpected error = down — mock `verifyIdToken` throwing a network error; verify status is 'down'
- Latency measured correctly — use `vi.useFakeTimers()` or verify `latency_ms` is a non-negative number

Use `beforeEach(() => vi.clearAllMocks())`.
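The inverted Firebase check (an expected failure means the SDK is alive) is the easiest part of this suite to get backwards. Isolating it as a pure function, as sketched below, makes the two Firebase test cases above trivial one-liners; the helper name is illustrative, not from the codebase:

```typescript
// Sketch: classify the error thrown by admin.auth().verifyIdToken() on a
// deliberately invalid token. An argument/INVALID-style error means the SDK
// reached its verification path, so the service counts as healthy; anything
// else (network failure, initialization failure) counts as down.
function classifyFirebaseProbeError(err: unknown): 'healthy' | 'down' {
  const message = err instanceof Error ? err.message : String(err);
  return message.includes('argument') || message.includes('INVALID')
    ? 'healthy'
    : 'down';
}
```

Keeping this logic pure also means the "expected error = healthy" and "unexpected error = down" cases can be tested without any firebase-admin mock at all.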
cd /home/jonathan/Coding/cim_summary/backend && npx vitest run src/tests/unit/healthProbeService.test.ts --reporter=verbose 2>&1
All healthProbeService tests pass. Probes verified as making real API calls (mocked). Orchestrator verified as fault-tolerant (one probe failure doesn't stop others). Results verified as persisted via HealthCheckModel.create(). Supabase probe uses getPostgresPool, not PostgREST.
<success_criteria>
- nodemailer and @types/nodemailer installed in backend/package.json
- healthProbeService exports ProbeResult and healthProbeService.runAllProbes
- 4 probes: document_ai, llm_api, supabase, firebase_auth
- Each probe returns structured ProbeResult with status/latency_ms/error_message
- Probe results persisted via HealthCheckModel.create()
- Individual probe failures isolated (other probes still run)
- All unit tests pass </success_criteria>