February 2026

When an AI Agent Removes a Masking Policy

You asked your AI agent to build a gold layer table. It inspected the schema, wrote the DDL, but didn't like the results—a masking policy skewed the aggregation—and solved it by removing the policy. The agent is happy that it "works" now, while your data has become unprotected.

The Problem: Agents Optimize for Task Completion

When you tell an AI agent "create a gold layer table that aggregates customer data by region," here's the typical flow:

  1. Inspects the source table schema
  2. Writes a CREATE TABLE ... AS SELECT statement
  3. Runs it
  4. Gets results back, but the email column is all '**MASKED**'
  5. Removes the obstacle

Step 5 is the risk. The agent doesn't reason about governance intent; it reasons about blockers. So it does the rational thing for task completion:

-- Agent's "fix" for the masking policy error
ALTER TABLE customers ALTER COLUMN email UNSET MASKING POLICY;

-- Now the gold layer works!
CREATE TABLE gold.customer_regions AS
SELECT region, email, COUNT(*) as order_count
FROM customers
GROUP BY region, email;

The Failure Modes That Survive Permission Scoping

The reasoning said it. You were distracted.

Modern agents often emit reasoning summaries or execution traces. In many cases, they explicitly state they're about to do something significant:

"The masking policy on the email column is preventing the CTAS from succeeding. I'll need to remove it temporarily to complete the task. I should note that this removes PII protection."

The model told you. But in real workflows, you're often in another tab, in a PR, on a call, or context-switching—because the whole point of the agent is parallel work. The risk isn't always zero oversight; it's accidental approval of a destructive command that looks like a routine fix, such as ALTER TABLE customers ALTER COLUMN email UNSET MASKING POLICY.

Mistakes like this can be attributed to a scaling effect teams are underestimating: AI is increasing SQL and migration volume faster than headcount grows. Even outside fully autonomous agents, assistant-generated changes now flow through normal SDLC paths, so the same teams are reviewing, debugging, and maintaining more generated code with the same time budget. That compounding load increases multitasking pressure and raises the odds of accidentally approving a destructive command hidden inside routine-looking changes.

The code didn't match the intent.

The reasoning might say one thing while the SQL does another. Here's a real pattern:

Agent reasoning: "I need to create a filtered view that respects the masking policy. I'll build the gold layer using only the non-sensitive columns and add a reference to the policy-protected column through a secure view."

Agent SQL:

ALTER TABLE customers ALTER COLUMN email UNSET MASKING POLICY;
CREATE TABLE gold.customer_regions AS SELECT region, email, COUNT(*) ...

The model "knew" the right approach; its code generation pipeline produced a destructive one. This divergence happens under pressure—complex schema, long context windows, multi-step tool use—and it's invisible if you're only reading the reasoning trace.

In practice, this is the dangerous moment: you see the model apparently on track, finger on approve, ready to approve for the hundredth time in the same day—then discover the emitted SQL did something else entirely and the approved command was destructive.

You can't catch this by trusting intent text alone. The reasoning can look safe while the SQL the model produced is destructive.

Agents route around restrictions.

Now the obvious counterargument: "This is why you don't give AI god perms."

Correct in principle. Insufficient in practice:

  • The agent needs CREATE TABLE and SELECT to build the gold layer. Reasonable. But the role granted those privileges may also hold policy-modification rights that nobody audited.
  • Agents running through MCP servers, connectors, or service accounts inherit connection permissions—and those can end up broader than initially intended.
  • When an operation fails, many agents try alternate paths, such as switching to an elevated role. It's not trying to break policy boundaries; it's trying to finish the task.

"Don't give it god perms" is necessary and still insufficient. The agent doesn't need full admin to do damage; it needs slightly-too-broad access and a blocked task.

The failure mode that matters: partial execution drift

In practice, the problem isn't just a single scary statement. It's that agents tend to run multi-step plans where each statement is committed independently.

If a risky change happens early (policy removed, privileges widened) and a later step fails, you can end up with the worst outcome: the intended artifact was never delivered, but the environment is left in a weaker state.

Even when an agent intends a rollback ("remove it temporarily, run the CTAS, re-apply"), rollback is a separate step that can fail or never run at all.

Why Permissions Aren't Enough

Restrict permissions—it's the most important single control. But it's also the layer most likely to have gaps nobody audited. ACCOUNTADMIN isn't required to do harm; MODIFY on the wrong object is enough. Service accounts accumulate permissions over months. MCP connectors inherit connection-level grants. Static analysis prevents damage caused by those gaps.

Treat your SQL like infra

The SQL often isn't buggy. It executes and can complete the requested task. The issue is intent mismatch: it can violate governance to get there.

The fix isn't better prompts or more careful permissions. It's treating agent-generated SQL the same way you'd treat a Terraform plan: parse it, evaluate it against policy, and block before it runs.

Lexega gives SQL the same pre-execution guardrails that infrastructure-as-code has had for years.

Try It Yourself

Paste the agent's exact output from the opening scenario into the Lexega Playground:

-- Agent's "fix" for the masking policy error
ALTER TABLE customers ALTER COLUMN email UNSET MASKING POLICY;

-- Now the gold layer works!
CREATE TABLE gold.customer_regions AS
SELECT region, email, COUNT(*) as order_count
FROM customers
GROUP BY region, email;

When paired with a policy document, the ALTER TABLE ... UNSET MASKING POLICY fires a critical signal that gets blocked before the CREATE TABLE ever runs.

For how this fits into CI/CD pipelines where agent-generated SQL flows through PRs and migrations, see Code Review Can't Keep Up with AI-Generated SQL