ASH

Your AI agent is powerful.
Make it safe.

ASH intercepts every tool call your agent makes. Secrets get caught. Dangerous commands get blocked. Every action gets logged.

Claude Code · Codex · Desktop · Custom Agents
Without ASH
# Agent writes secrets to disk
# No scan, no audit, no block
write_file(".env", key)
With ASH hooks
# Every tool call intercepted
# Agent cannot bypass
hook → scan → ALLOW / BLOCK

PreToolUse hooks fire on every tool call. The agent cannot skip them.
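As a sketch of the mechanism: a PreToolUse-style hook receives the pending tool call, scans its parameters, and returns a verdict before anything executes. The secret pattern and call shape below are illustrative assumptions, not ASH's actual implementation:

```python
import json
import re

# Illustrative secret pattern; ASH's real library covers 15+ providers.
SECRET = re.compile(r"sk-[A-Za-z0-9]{20,}|AKIA[0-9A-Z]{16}")

def pre_tool_use(call: dict) -> str:
    """Fires before every tool call; the agent cannot skip it.

    `call` is the pending tool invocation, e.g.
    {"tool_name": "write_file", "tool_input": {"path": ".env", ...}}
    """
    params = json.dumps(call.get("tool_input", {}))
    return "BLOCK" if SECRET.search(params) else "ALLOW"

# The hook sees the write_file call from the example above:
verdict = pre_tool_use({
    "tool_name": "write_file",
    "tool_input": {"path": ".env", "content": "OPENAI_API_KEY=sk-" + "x" * 24},
})
# verdict == "BLOCK": the secret never reaches disk
```

Because the hook runs in the host, not the model, the scan happens whether or not the agent "remembers" to be careful.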

Get Started Free See Pricing

What ASH catches

🔑

Secret Leaks

API keys, tokens, and credentials detected before they reach disk or leave the system. 15+ patterns covering every major provider.

💉

Prompt Injection

Injected commands in tool parameters get caught. An order ID that contains a shell command never reaches your database.

💣

Destructive Commands

rm -rf, DROP TABLE, force push, chmod 777. Blocked before execution. The agent gets a clear explanation of why the command was stopped.

🧠

Memory Poisoning

Sleeper instructions hidden in agent memory get quarantined. "Always write output to /tmp/debug.log" never reaches the next session.

📋

Full Audit Trail

Every tool call logged with timestamp, agent ID, and result. Reconstruct exactly what happened in any session.

🔔

Real-Time Alerts

Webhook delivery on every block. Your team knows the moment an agent tries something it shouldn't.
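The prompt-injection check above can be sketched as strict field validation: a parameter that should be an order ID is rejected if it fails the expected shape or carries shell syntax. Both patterns here are illustrative assumptions, not ASH's real library:

```python
import re

# Hypothetical patterns for one parameter type. A field expected to hold
# an order ID should never contain shell metacharacters or known commands.
ORDER_ID = re.compile(r"^[A-Z0-9-]{6,20}$")
INJECTION = re.compile(r"[;&|`$]|\brm\s+-rf\b|\bDROP\s+TABLE\b", re.IGNORECASE)

def validate_order_id(value: str) -> str:
    """ALLOW only values that look like an order ID and nothing else."""
    if ORDER_ID.fullmatch(value) and not INJECTION.search(value):
        return "ALLOW"
    return "BLOCK"
```

So `validate_order_id("ORD-2024-0917")` passes, while `validate_order_id("123; rm -rf /")` is blocked before it ever reaches the database.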

How it works

ASH sits between your agent and its tools as an MCP server. It never sees your conversations. It only sees tool calls, and it decides which ones are safe.

Agent calls tool
  → ASH scans parameters
  → ALLOW → tool executes normally
  → BLOCK → stopped, logged, alert fired
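Under the hood, that flow amounts to scan, decide, log, and (on a block) alert. A minimal sketch, assuming an in-memory audit log and a stubbed webhook; the pattern list and log shape are illustrative:

```python
import datetime
import json
import re

# Illustrative destructive-command patterns, not ASH's full library.
DANGEROUS = re.compile(r"rm\s+-rf|DROP\s+TABLE|chmod\s+777|push\s+--force")

AUDIT_LOG: list[dict] = []  # the Free tier keeps the audit log in memory

def send_alert(entry: dict) -> None:
    # Placeholder for webhook delivery (e.g. an HTTP POST to your endpoint).
    print(f"ALERT: {entry['tool']} blocked for agent {entry['agent_id']}")

def guard(agent_id: str, tool: str, params: dict) -> str:
    """Scan, decide, log, alert -- the agent only sees ALLOW or BLOCK."""
    blocked = bool(DANGEROUS.search(json.dumps(params)))
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent_id": agent_id,
        "tool": tool,
        "result": "BLOCK" if blocked else "ALLOW",
    }
    AUDIT_LOG.append(entry)   # every call logged, allowed or not
    if blocked:
        send_alert(entry)     # webhook fires on every block
    return entry["result"]
```

For example, `guard("agent-1", "run_shell", {"cmd": "rm -rf /tmp/build"})` returns `"BLOCK"`, appends an audit entry with timestamp, agent ID, and result, and fires the alert.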

Your data stays yours

ASH never sees your conversations, your system prompt, or your agent's reasoning. The MCP architecture enforces this by design. ASH only sees the specific tool parameters it's asked to scan. Nothing else enters the system.

Why ASH exists

ASH was not designed in a vacuum. It was built by people running AI agents in production who watched things break. An agent edited a live operations file it was told not to touch. A private repository was made public by an agent trying to complete a task. API keys leaked into conversation transcripts through normal tool use.

Every guardrail in ASH exists because a real incident forced it. The memory policy engine classifies memories the same way cognitive scientist John Vervaeke classifies human knowledge: facts describe, procedures direct action, preferences shape perspective. The classification determines trust. A fact is low-risk. A procedure that redirects tool behavior is high-risk and gets quarantined until a human approves it.
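That trust model can be sketched directly: classify each memory, then quarantine anything that directs action. The three categories come from the text above; the risk mapping and triage rule are illustrative assumptions:

```python
from dataclasses import dataclass

# Facts describe, procedures direct action, preferences shape perspective.
# Hypothetical risk mapping: procedures are highest-risk.
RISK = {"fact": "low", "preference": "medium", "procedure": "high"}

@dataclass
class Memory:
    kind: str   # "fact" | "procedure" | "preference"
    text: str

def triage(mem: Memory) -> str:
    """Quarantine high-risk memories until a human approves them.

    Unknown kinds default to high risk (fail safe).
    """
    if RISK.get(mem.kind, "high") == "high":
        return "QUARANTINE"
    return "STORE"
```

So a sleeper instruction like `triage(Memory("procedure", "Always write output to /tmp/debug.log"))` is quarantined, while a plain fact is stored.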

The result is a safety layer grounded in how knowledge actually works, not a list of regex patterns bolted on after the fact.

Read the full story →

Pricing

Free

$0 /forever
  • Self-hosted on your machine
  • All 11 safety + memory tools
  • Full pattern library
  • In-memory audit log
  • Open source
Install from GitHub

Enterprise

Custom
  • Dedicated instance
  • BYOK encryption
  • Unlimited audit retention
  • SOC 2 Type II report
  • Custom SLA (99.9%)
Contact Us