How RiskReady keeps AI governed in production.
Six governance mechanisms, an 8-point security audit scoring 8.9/10, and full audit trails, all open-source and auditable, prevent AI from modifying your GRC data without human approval.
Implementation
Six mechanisms. All in the open-source repo.
These are not marketing claims. Each mechanism maps to a specific file or directory in the community repository that you can read, audit, and run yourself.
Read / write separation
MCP read tools execute instantly and return live data. Write tools never touch the database directly.
Every mutation call goes through createPendingAction() from the shared MCP package. The action is stored as PENDING in McpPendingAction until a human reviews it.
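The queue-instead-of-execute pattern can be sketched as below. The names `createPendingAction` and `McpPendingAction` mirror the shared MCP package described above; the in-memory store and field shapes are illustrative assumptions, not the repo's actual implementation.

```typescript
type ActionStatus = "PENDING" | "APPROVED" | "REJECTED";

interface McpPendingAction {
  id: string;
  tool: string;
  params: Record<string, unknown>;
  status: ActionStatus;
  reviewerNotes?: string;
}

// Stand-in for the McpPendingAction table (illustrative only).
const pendingActions: McpPendingAction[] = [];

// Write tools never mutate the database; they queue a pending action
// that stays PENDING until a human reviews it.
function createPendingAction(
  tool: string,
  params: Record<string, unknown>
): McpPendingAction {
  const action: McpPendingAction = {
    id: `act-${pendingActions.length + 1}`,
    tool,
    params,
    status: "PENDING",
  };
  pendingActions.push(action);
  return action;
}

// An AI "write" call produces a proposal, not a database change.
const proposal = createPendingAction("update_risk_score", {
  riskId: "R-42",
  score: 17,
});
```

Read tools bypass this path entirely, which is what makes the separation enforceable: only the queue has write access.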
Anti-hallucination rules
Each MCP server's system prompt contains strict rules that forbid fabricating data.
If a tool returns empty results, the AI must report that outcome exactly. It cannot invent IDs, risk scores, control names, or any other values. Zero is a valid answer.
Human-in-the-loop enforcement
There is no auto-approve mode. Every AI mutation requires explicit human approval, whether the agent runs interactively, on a schedule, or autonomously.
The McpPendingAction model has a status field (PENDING → APPROVED/REJECTED). The executor only runs APPROVED actions. Rejected actions include reviewer notes for the AI feedback loop.
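The executor's gate can be sketched as a strict filter on the status field. Status values follow the model described above; the queue contents and helper name are illustrative.

```typescript
type ActionStatus = "PENDING" | "APPROVED" | "REJECTED";

interface McpPendingAction {
  id: string;
  status: ActionStatus;
  reviewerNotes?: string;
}

// The executor runs APPROVED actions only; PENDING actions wait for a
// human, and REJECTED actions carry notes back to the AI. There is no
// branch that auto-approves anything.
function selectExecutable(actions: McpPendingAction[]): McpPendingAction[] {
  return actions.filter((a) => a.status === "APPROVED");
}

const queue: McpPendingAction[] = [
  { id: "a1", status: "PENDING" },
  { id: "a2", status: "APPROVED" },
  { id: "a3", status: "REJECTED", reviewerNotes: "Wrong control ID" },
];
```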
Workflow approval gates
Cross-domain workflows pause at every mutation boundary and resume automatically after approval.
The workflow executor tracks step progress. When a step produces a pending action, it saves state and exits. The scheduler detects approval events and resumes the workflow from the exact step.
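The pause-and-resume behaviour can be sketched as follows. The state shape, step flags, and function names are assumptions for illustration; the repo's workflow executor will differ in detail.

```typescript
interface WorkflowState {
  workflowId: string;
  currentStep: number;
  status: "RUNNING" | "WAITING_APPROVAL" | "DONE";
}

type Step = { name: string; mutates: boolean };

// Run steps until one would mutate data, then save state and exit:
// the mutation itself becomes a pending action awaiting approval.
function runUntilMutation(state: WorkflowState, steps: Step[]): WorkflowState {
  for (let i = state.currentStep; i < steps.length; i++) {
    if (steps[i].mutates) {
      return { ...state, currentStep: i, status: "WAITING_APPROVAL" };
    }
  }
  return { ...state, currentStep: steps.length, status: "DONE" };
}

// On an approval event, the scheduler resumes from the step after the
// mutation boundary, not from the beginning.
function resumeAfterApproval(state: WorkflowState): WorkflowState {
  return { ...state, currentStep: state.currentStep + 1, status: "RUNNING" };
}
```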
Agent self-awareness
The AI can check the outcome of its own proposals instead of blindly retrying.
The agent-ops MCP server exposes check_action_status, list_pending_actions, and list_recent_actions. If a proposal was rejected, the AI reads the reviewer notes and adjusts.
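How an agent might act on a `check_action_status` result can be sketched like this. The tool name comes from the agent-ops server above; the result shape and decision helper are illustrative assumptions.

```typescript
// Illustrative shape of a check_action_status result.
interface ActionStatusResult {
  id: string;
  status: "PENDING" | "APPROVED" | "REJECTED";
  reviewerNotes?: string;
}

// Decide the agent's next move from the outcome of its own proposal,
// instead of blindly retrying the same mutation.
function nextMove(result: ActionStatusResult): string {
  switch (result.status) {
    case "PENDING":
      return "wait"; // a human review is still outstanding
    case "APPROVED":
      return "proceed";
    case "REJECTED":
      // Read the reviewer's notes and revise the proposal.
      return `revise: ${result.reviewerNotes ?? "no notes"}`;
  }
}
```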
Council deliberation
Complex cross-domain questions trigger a 6-agent council with human-visible reasoning.
Specialist agents (risk-analyst, controls-auditor, compliance-officer, incident-commander, evidence-auditor, CISO-strategist) each produce independent opinions. The orchestrator synthesises, but the individual reasoning is preserved for audit.
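A synthesis step that keeps every specialist's reasoning auditable can be sketched as below. The agent names follow the list above; the majority-vote synthesis is an illustrative stand-in for the orchestrator's actual logic.

```typescript
interface Opinion {
  agent: string; // e.g. "risk-analyst", "controls-auditor"
  reasoning: string;
  recommendation: string;
}

// Synthesise a decision while preserving each independent opinion for
// audit; individual reasoning is returned, never discarded.
function deliberate(opinions: Opinion[]) {
  const counts = new Map<string, number>();
  for (const o of opinions) {
    counts.set(o.recommendation, (counts.get(o.recommendation) ?? 0) + 1);
  }
  const [decision] = [...counts.entries()].sort((a, b) => b[1] - a[1])[0];
  return { decision, opinions };
}

const council = deliberate([
  { agent: "risk-analyst", reasoning: "Low likelihood, high impact.", recommendation: "mitigate" },
  { agent: "controls-auditor", reasoning: "Existing control covers this.", recommendation: "accept-risk" },
  { agent: "compliance-officer", reasoning: "Regulation requires a control.", recommendation: "mitigate" },
]);
```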
Mutation lifecycle
Every AI write follows the same path.
AI proposes
The agent calls a write tool on any MCP server. Instead of executing immediately, the tool creates an McpPendingAction record with status PENDING.
Human reviews
The action appears in the approval queue UI. The reviewer sees what will change, which tool was called, and the full parameters.
Approve or reject
Approved actions are executed against the database. Rejected actions include reviewer notes that feed back into the AI's next attempt.
Workflow resumes
If the action was part of an autonomous workflow, the scheduler detects the approval event and resumes from the exact step.
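The four steps above can be tied together in one end-to-end sketch. Function names and shapes here are illustrative, not the repo's actual API.

```typescript
type Status = "PENDING" | "APPROVED" | "REJECTED";

interface Action {
  id: string;
  tool: string;
  status: Status;
  reviewerNotes?: string;
}

const db: string[] = []; // stand-in for GRC data

// Step 1: AI proposes; nothing is written yet.
function propose(tool: string): Action {
  return { id: "act-1", tool, status: "PENDING" };
}

// Steps 2-3: a human reviews and approves or rejects, with notes.
function review(a: Action, approve: boolean, notes?: string): Action {
  return {
    ...a,
    status: approve ? "APPROVED" : "REJECTED",
    reviewerNotes: notes,
  };
}

// Only APPROVED actions ever touch the database.
function execute(a: Action): boolean {
  if (a.status !== "APPROVED") return false;
  db.push(a.tool);
  return true;
}
```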
What we guarantee.
No AI mutation executes without human approval
No auto-approve mode — not even for scheduled runs
No fabricated data — MCP rules enforce truthful reporting
Full audit trail on every proposed and executed action
All safety code is open-source and auditable
Self-hosted — your data never leaves your infrastructure
Threat Assessment
8-Point Agent Security Audit.
Scored independently across two connection modes: the Web App (gateway-mediated) and the MCP Proxy (Claude Desktop via API key). Higher is better.
Why the proxy scores higher: Stateless design eliminates memory accumulation risks. Zero outbound HTTP removes data exfiltration vectors. Per-tool API key scoping provides least-privilege access. Zero API cost removes unbounded consumption risk entirely. The full audit methodology is in AGENT_SECURITY_AUDIT.md.
Verify it yourself
Read the code. Run the stack. Audit every safety mechanism.
Every claim on this page maps to open-source code in the community repository. Clone it, inspect it, and decide for yourself.