ATEF ATAYA

AI Educator & YouTuber sharing insights about artificial intelligence, automation, and the future of technology.



Before You Delete That File, Ask Depwire What Breaks

April 19, 2026
Originally on Medium
AI coding assistants are fast. That’s also the problem.

You’re refactoring a TypeScript monorepo. You ask Claude or Cursor to delete src/utils/encode.ts — it's been sitting there, seemingly unused. The AI agrees, removes it, and your CI pipeline turns red. Thirty import chains are broken across 18 files. The AI had no idea those dependencies existed.

This is not a hypothetical. It’s what happens in any codebase large enough that no single developer holds its full dependency graph in their head — which is every production codebase.

Depwire’s What If simulation exists to solve this exact problem. Here’s how it works, and how to use it to make AI-generated code changes safer.

The Core Problem: AI Tools Have No Map

Every AI coding assistant — Claude, GPT-4, Cursor, Copilot — starts each session with zero knowledge of your codebase’s dependency structure. They read files linearly. They understand syntax. What they cannot see is the graph: which file depends on which symbol, across how many layers, and what breaks if a node disappears.

A developer with six months on a codebase develops an intuition for this. AI tools don’t accumulate that intuition. They operate on what’s in their context window, and dependency graphs are rarely in the context window.

The result is a predictable failure mode: AI-generated changes that are locally correct but globally destructive. The function signature looks right. The import compiles. But three files you weren’t looking at are now silently broken.

What Depwire’s What If Simulation Does

Depwire builds a deterministic dependency graph of your codebase using tree-sitter — a production-grade, language-agnostic parser. It tracks every import, export, symbol definition, and cross-language API edge across 11 languages, including TypeScript, Python, Go, Rust, Java, C++, and PHP.

The What If simulation takes this graph and runs a hypothetical: what happens to the graph if you delete or modify a specific file?

It answers:

  • How many imports break?
  • Which files are directly and transitively affected?
  • What is the health score delta?
  • Which cross-language API connections break?

And it shows the before/after visually — two arc diagrams side by side, with broken connections highlighted in red against a dimmed ghost of the current codebase.
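To make the mechanics concrete, here is a toy version of a deletion simulation in TypeScript. A hand-built import map stands in for the real tree-sitter graph; none of this is Depwire’s actual implementation, just the idea behind the first two metrics:

```typescript
// A toy What-If deletion on a hand-built import graph, assuming
// edges point from each file to the files it imports.
const imports: Record<string, string[]> = {
  "src/api/routes.ts": ["src/utils/encode.ts"],
  "src/api/auth.ts": ["src/utils/encode.ts"],
  "src/cli/main.ts": ["src/api/routes.ts"],
};

function whatIfDelete(target: string) {
  // Direct breakage: import edges that point at the deleted file.
  const brokenImports = Object.values(imports)
    .flat()
    .filter((dep) => dep === target).length;

  // Transitive breakage: every file that can reach the target
  // through the graph is affected, not just the direct importers.
  const affected = new Set<string>();
  let grew = true;
  while (grew) {
    grew = false;
    for (const [file, deps] of Object.entries(imports)) {
      if (affected.has(file)) continue;
      if (deps.some((d) => d === target || affected.has(d))) {
        affected.add(file);
        grew = true;
      }
    }
  }
  return { brokenImports, affectedFiles: affected.size };
}

console.log(whatIfDelete("src/utils/encode.ts"));
// { brokenImports: 2, affectedFiles: 3 }
```

Note that src/cli/main.ts never imports encode.ts directly, yet it still lands in the affected set — that transitive layer is exactly what AI assistants miss.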

How to Use It

Install:

npm install -g depwire-cli

Run a simulation:

depwire whatif . --simulate delete --target src/utils/encode.ts


The browser UI opens automatically and shows two arc diagrams. Left is the current state. Right is the simulated state — affected nodes highlighted in red, broken edges rendered as thick dashed red lines, everything else dimmed to 8% opacity so the damage is immediately visible.

Real Numbers: honojs/hono

We ran this against the Hono framework codebase — 352 TypeScript files, 6,245 symbols, 2,133 dependency edges.

Simulating the deletion of src/utils/encode.ts:

Broken imports:     30
Affected files:     18
Health score delta: -8

Thirty broken imports from a single utility file. The AI had no way to know this without Depwire. With Depwire, you know before you touch anything.

The risk badge reads High, which means you either need to refactor the dependents first or you need a migration plan, not a deletion.
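The idea behind the badge is a simple classification over the simulation’s metrics. Depwire’s actual thresholds aren’t documented here, so the cutoffs below are made up purely for illustration:

```typescript
// Hypothetical sketch: the cutoffs are illustrative, not Depwire's real ones.
type Risk = "Low" | "Medium" | "High";

function riskBadge(brokenImports: number, affectedFiles: number): Risk {
  if (brokenImports === 0) return "Low";
  if (brokenImports < 5 && affectedFiles < 3) return "Medium";
  return "High";
}

console.log(riskBadge(30, 18)); // the encode.ts numbers above land in "High"
```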

The MCP Integration: AI Agents That Know the Graph

Beyond the CLI, Depwire exposes a simulate_change MCP tool that AI assistants can call directly:

{
  "tool": "simulate_change",
  "params": {
    "action": "delete",
    "target": "src/utils/encode.ts"
  }
}

When Claude Desktop or Claude Code has Depwire connected as an MCP server, the AI can check its own proposed changes against the graph before committing to them. It doesn’t have to guess what breaks. It can ask Depwire, get a structured JSON response, and reason about the blast radius before writing a single line.
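In practice, that reasoning step might look like this. The response shape and field names below are assumptions for illustration, not Depwire’s documented schema:

```typescript
// Sketch of how an agent might reason over a simulate_change response.
// The interface below is an assumed shape, not Depwire's documented schema.
interface SimulationResult {
  brokenImports: number;
  affectedFiles: string[];
  healthScoreDelta: number;
}

function planNextStep(result: SimulationResult): string {
  if (result.brokenImports === 0) {
    return "safe to delete";
  }
  // Non-zero blast radius: sequence the refactor instead of deleting blind.
  return `refactor ${result.affectedFiles.length} dependents first`;
}

const result: SimulationResult = {
  brokenImports: 30,
  affectedFiles: Array.from({ length: 18 }, (_, i) => `file${i}.ts`),
  healthScoreDelta: -8,
};
console.log(planNextStep(result)); // "refactor 18 dependents first"
```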

Configure it in your Claude Desktop config:

{
  "mcpServers": {
    "depwire": {
      "command": "npx",
      "args": ["-y", "depwire-cli", "mcp"]
    }
  }
}

From that point on, Claude can call simulate_change, impact_analysis, and 15 other graph-aware tools against your actual codebase — not a probabilistic guess about it.

Where This Fits in an AI-Assisted Workflow

The practical workflow looks like this:

  1. Ask your AI assistant to propose a refactor
  2. Before accepting, run depwire whatif on the files it wants to change
  3. If the blast radius is acceptable, proceed
  4. If not, feed the impact analysis back to the AI and ask it to sequence the changes safely

With the MCP integration, steps 2 and 3 can happen automatically inside the AI’s reasoning loop. The AI proposes, checks, reconsiders, and proposes again — with actual dependency data, not guesswork.
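That loop can be sketched in a few lines. The simulate() stub below stands in for a real Depwire call, and the names and threshold are illustrative only:

```typescript
// Sketch of the propose / check / reconsider loop with a stubbed simulator.
type Change = { action: "delete" | "modify"; target: string };

function simulate(change: Change): { brokenImports: number } {
  // Stub standing in for a real Depwire call: pretend deleting encode.ts
  // breaks 30 imports and everything else is clean.
  return { brokenImports: change.target.endsWith("encode.ts") ? 30 : 0 };
}

function review(change: Change, maxBreakage = 0): boolean {
  // Gate: only accept changes whose blast radius is within tolerance.
  return simulate(change).brokenImports <= maxBreakage;
}

const proposal: Change = { action: "delete", target: "src/utils/encode.ts" };
if (!review(proposal)) {
  // Blast radius too large: feed the impact back and re-sequence the change.
  console.log("rejected: refactor dependents before deleting");
}
```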

This is the difference between vibe coding and engineering. Both produce code. One knows what it’s doing to your architecture.

Install

npm install -g depwire-cli

GitHub: github.com/depwire/depwire

Supports TypeScript, JavaScript, Python, Go, Rust, C, C#, Java, C++, Kotlin, and PHP. Works with Claude, Cursor, VS Code Copilot, and any MCP-compatible AI tool.

Back to all posts