Cross-Tool Automation Readiness Audit

Category: analysis
Subcategory: automation-readiness
Difficulty: intermediate
Target models: claude-opus, gpt, gemini-flash
Variables: {{preferred_llm}} {{workflow_snapshot}} {{systems_to_connect}} {{quality_requirements}} {{failure_modes}} {{approval_threshold}} {{measurement_plan}}
Tags: automation-audit, connector-health, workflow-maturity, tooling, operating-model
Updated: March 5, 2026

The Prompt

You are a reliability-first AI workflow auditor. Evaluate whether a process is ready for cross-tool automation and produce an implementation-first readiness package.

PREFERRED LLM / FAMILY:
{{preferred_llm}}

WORKFLOW SNAPSHOT:
{{workflow_snapshot}}

SYSTEMS TO CONNECT:
{{systems_to_connect}}

QUALITY REQUIREMENTS:
{{quality_requirements}}

KNOWN FAILURE MODES:
{{failure_modes}}

APPROVAL THRESHOLD:
{{approval_threshold}}

MEASUREMENT PLAN:
{{measurement_plan}}

Return exactly these sections:

1) Current-State Maturity Score (0-100)
- Process coverage
- Data quality
- Connector reliability
- Governance readiness

2) Risk Register
- Top 8 risk categories with likelihood, impact, mitigation, and owner.

3) Sequenced Automation Opportunities
- For each opportunity: value, complexity, risk, required controls, and prerequisites.
- Split into "pilot", "expand", "defer".

4) Connector Readiness Blueprint
- Read-only integrations first.
- Write-capable integrations and the approval surface.
- Retry and poison-pill strategy.

5) Governance Controls
- Policy checks
- Access controls
- Audit and rollback requirements

6) 30-Day Rollout Plan
- Week-by-week execution with stop conditions and success checkpoints.

Rules:
- Avoid hard assumptions: explicitly call out unknowns.
- Do not include direct execution instructions.
- Only propose writes that can be governed by policy and role-based approvals.
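The "retry and poison-pill strategy" asked for in section 4 can be sketched concretely. The snippet below is an illustrative pattern, not part of the prompt itself: failing items are retried with exponential backoff, and repeat failures are quarantined to a dead-letter list for human review instead of blocking the batch. The attempt count and backoff base are assumed defaults.

```python
import time

MAX_ATTEMPTS = 3    # assumed pilot default; tune per connector SLA
BASE_DELAY_S = 1.0  # exponential backoff base, in seconds

def process_with_quarantine(items, handler, dead_letter):
    """Retry each item with backoff; route repeat failures to a
    dead-letter list (the "poison pills") instead of halting the batch."""
    for item in items:
        for attempt in range(1, MAX_ATTEMPTS + 1):
            try:
                handler(item)
                break  # success: move on to the next item
            except Exception as exc:
                if attempt == MAX_ATTEMPTS:
                    # Quarantine for human review, keep processing the rest.
                    dead_letter.append({"item": item, "error": str(exc)})
                else:
                    time.sleep(BASE_DELAY_S * 2 ** (attempt - 1))
```

A dead-letter list like this gives the "manual override" path from the tips below a concrete queue to drain, and keeps one bad record from poisoning an entire sync run.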

When to Use

Use this when evaluating whether a team can safely automate coordinated workflows without overcommitting to fragile integrations. It helps identify which tasks should be automated now, deferred, or kept manual until better telemetry and controls exist.

Variables

| Variable | Description | Example |
| --- | --- | --- |
| preferred_llm | LLM family used for planning and synthesis | claude-opus, gpt, gemini |
| workflow_snapshot | Current process narrative | "Manual weekly support triage across Slack, Notion, and Jira" |
| systems_to_connect | Systems and expected connectors | "Notion, Jira, Slack, Google Workspace, email" |
| quality_requirements | Required accuracy and uptime conditions | "No missed escalations; 95% evidence traceability" |
| failure_modes | Known weak points | "API rate limits, missing fields, delayed approvals" |
| approval_threshold | What requires human sign-off | "Any action changing priority, owner, or billing state" |
| measurement_plan | Metrics for go/no-go | "triage latency, action acceptance rate, stale updates" |

Tips & Variations

  • Run a “pilot scope” version with only read-only connectors first.
  • Add a dedicated “manual override” path for any exception class.
  • Ask the model to output a dependency graph where each integration edge has a single owner.
  • Include a “de-risk if” section with explicit triggers for delaying rollout.
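The dependency-graph tip above is easy to validate mechanically. This is a minimal sketch, assuming each integration edge is a dict with "from", "to", and "owner" keys (the shape is an assumption for illustration): it flags any edge that lacks a single named owner, which is the condition the tip asks the model to enforce.

```python
def unowned_edges(edges):
    """Return integration edges missing a single named owner.

    Each edge is assumed to look like:
      {"from": "Slack", "to": "Jira", "owner": "ops-team"}
    An owner that is empty, absent, or a list of several names fails
    the single-owner rule.
    """
    bad = []
    for edge in edges:
        owner = edge.get("owner")
        if not owner or isinstance(owner, (list, tuple)):
            bad.append(edge)
    return bad
```

Running this over the model's proposed graph before rollout turns "every edge has one owner" from a review guideline into a stop condition.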

Example Output

  • Maturity score: 62/100 with strong planning quality but weak connector governance.
  • Pilot automation: triage draft generation + Slack draft updates.
  • Expand candidates: ticket routing, status aggregation, follow-up reminders with approval gates.
  • Deferred items: high-risk writebacks until idempotency and rollback checks are validated.
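A composite score like the 62/100 above can be derived from the four dimensions in section 1 with a simple weighted average. The weights below are an assumption (equal weighting), not something the prompt prescribes; adjust them to match your risk posture.

```python
# Assumed equal weights across the four readiness dimensions from
# section 1; tune these to reflect your own priorities.
WEIGHTS = {
    "process_coverage": 0.25,
    "data_quality": 0.25,
    "connector_reliability": 0.25,
    "governance_readiness": 0.25,
}

def maturity_score(dimension_scores):
    """Combine per-dimension 0-100 scores into one weighted 0-100 score."""
    return round(sum(WEIGHTS[k] * dimension_scores[k] for k in WEIGHTS))
```

With scores of 80, 70, 40, and 58 for the four dimensions, this yields 62, matching the example's "strong planning quality but weak connector governance" profile.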