baymax v1.0.0 • ai-agent-permission-audit
LOCAL CLI • SECURITY AUDIT

Permission drift is quiet. Baymax is not.

Baymax scans AI coding agent configs for dangerous "always allow" settings covering shell access, filesystem writes, network permissions, and MCP integrations, catching them before they harden into long-lived trust debt.

What Baymax actually checks

It normalizes each agent's config shape into a common risk model, so findings are comparable across teams and tools.
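To make that concrete, here is a minimal sketch of what such a normalization step could look like. It is illustrative only, not Baymax's actual internals: the agent names, config keys, and the (capability, persistent, scope) finding shape are assumptions modeled on the signals table below.

```python
# Hypothetical normalization: map each agent's raw config shape into a
# common finding record so results are comparable across tools.
# Keys and capability names are illustrative, not Baymax's real schema.

def normalize(agent: str, raw: dict) -> list[dict]:
    findings = []
    if agent == "codex-cli" and raw.get("full_auto") is True:
        findings.append({
            "agent": agent,
            "capability": "auto-approve-all",  # what the setting enables
            "persistent": True,                # survives across sessions
            "scope": "global" if raw.get("_global") else "project",
        })
    if agent == "aider" and raw.get("shell") is True:
        findings.append({
            "agent": agent,
            "capability": "unrestricted-shell",
            "persistent": True,
            "scope": "project",
        })
    return findings
```

Once every agent's config is flattened into records like these, the same risk rules can run over all of them regardless of which tool produced the setting.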

Agent | Signals Baymax scans
Claude Code | allowedTools, permissions.allow, MCP server permissions and env-secret exposure
Cursor | permissions.allow, trustedPaths
Codex CLI | approval_policy: auto, full_auto: true, sandbox.enabled: false
Gemini CLI | trustedFolders, sandboxEnabled: false, MCP server trust
GitHub Copilot | permanentlyTrustedDirectories, networkAccess: true
Aider | yes: true, auto-commits: true, shell: true
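For illustration, a settings fragment of the kind these rules flag. This is a hypothetical example, not copied from any real project; the exact allow-rule syntax varies by agent and version, so check your agent's own docs. A wildcard shell allowance like this is persistent, unrestricted shell access:

```json
{
  "permissions": {
    "allow": ["Bash(*)"]
  }
}
```

A narrower rule scoped to a single command would land in a lower risk tier than the wildcard shown here.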

How it works in practice

Discover

Find project-level and global config files where persistent trust is stored.

Classify

Map each permission to a rule, assign risk, and escalate when persistence + global scope increase impact.
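The escalation rule in the Classify step can be sketched as follows. This is an assumed model, not Baymax's actual rule engine: the tier names match the risk levels below, and the escalation condition (persistent and global) is taken from the description above.

```python
# Illustrative escalation rule: a finding's base risk moves up one tier
# when the permission is both persistent and globally scoped, since its
# impact then extends beyond a single project.

TIERS = ["low", "medium", "high"]

def score(base: str, persistent: bool, scope: str) -> str:
    idx = TIERS.index(base)
    if persistent and scope == "global":
        idx = min(idx + 1, len(TIERS) - 1)  # escalate, capped at high
    return TIERS[idx]
```

Under this model, a permanently trusted risky command in a global config scores high, while the same rule in a single project's config stays medium.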

Remediate

baymax fix opens an interactive cleanup: medium- and high-risk findings come prechecked for removal, low-risk findings are optional.

Core commands

Install from npm, then run locally or in CI.

npm install -g baymax-cli
baymax scan .
baymax scan . --depth 3
baymax scan . --json
baymax fix .
baymax export --md --output ./security-report.md
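Because the commands run locally, they slot into CI. A minimal GitHub Actions step might look like the following; it assumes the scan's exit code reflects findings and that --json writes a machine-readable report, both of which should be confirmed against the CLI's own docs before gating a build on them:

```yaml
# Hypothetical CI step: archive a JSON report, then fail the job if the
# plain scan reports findings (assumed exit-code behavior).
- name: Audit AI agent permissions
  run: |
    npm install -g baymax-cli
    baymax scan . --json > baymax-report.json
    baymax scan .
```

Exporting the Markdown report alongside the JSON one is a reasonable variant if you want a human-readable artifact attached to the run.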

Risk engine

Baymax audits capabilities, not intent. It scores what the permission enables.

HIGH

Immediate concern. Unrestricted shell, sandbox disabled, or auto-approve-all policies.

MEDIUM

Known risky commands permanently trusted (git, curl, node, python), or broad filesystem and network capabilities.

LOW

Constrained permissions with reduced blast radius; still tracked to prevent drift.

Get started

Install Baymax in under a minute and audit your AI coding agent permissions locally.

npm install -g baymax-cli
baymax scan .