The new rules of application security in the age of AI-generated code

The following is by Mesh Chief Information Security Officer Daniel Hooper

Software is no longer written solely by humans. Large language models, code assistants, and autonomous agents now generate boilerplate, business logic, integrations, and sometimes entire features with a single prompt.

This shift delivers enormous speed, but it also introduces a new class of security risk. AI systems optimize for functional output, not trust boundaries, threat models, or compliance. As a result, the assumptions behind traditional Application Security Testing (AST) no longer hold.

This article explains how AI-generated code reshapes the threat landscape, why existing AST workflows fall short, and what a modern, AI-aware AST pipeline must look like.

What is Application Security Testing (AST)?

Application Security Testing (AST) refers to the tools and practices used to identify vulnerabilities in software before (and sometimes after) it ships. Historically, AST focused on ensuring applications handle data safely, enforce access controls, and resist common attack patterns.

AST typically falls into three categories:

  • Static Application Security Testing (SAST): analyzes source code or binaries for vulnerable patterns without executing them
  • Dynamic Application Security Testing (DAST): probes a running application for exploitable behavior
  • Software Composition Analysis (SCA): inventories third-party dependencies and flags known vulnerabilities

These approaches evolved in a world where humans wrote code incrementally, followed architectural conventions, and understood system boundaries. AI breaks those assumptions.

Why AI-generated code changes the threat model

AI-generated code isn’t just faster to produce; it’s structurally different. Its risks are architectural, not just syntactic.

1. No awareness of security architecture

AI does not understand trust zones, separation of concerns, or data-handling policies. It frequently produces code that works while quietly violating design intent.

Common examples include:

  • Mixing authentication and business logic
  • Bypassing service or network boundaries
  • Returning sensitive fields from public endpoints
  • Disabling certificate validation “for convenience”
  • Applying overly broad session or authorization scopes
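Several of these violations are easy to spot once named. Here is a minimal sketch, using a hypothetical user record and field names, of the “returning sensitive fields from public endpoints” pattern and its fix:

```python
# A user record as it might exist inside an internal service.
user = {
    "id": 42,
    "email": "dev@example.com",
    "password_hash": "$argon2id$...",   # must never leave the service
    "session_token": "tok_abc123",      # must never leave the service
}

# Antipattern often seen in generated handlers: serialize the whole record.
def public_profile_unsafe(record):
    return record  # leaks password_hash and session_token

# Fix: an explicit allowlist encodes the trust boundary in code.
PUBLIC_FIELDS = {"id", "email"}

def public_profile(record):
    return {k: v for k, v in record.items() if k in PUBLIC_FIELDS}
```

The allowlist makes the boundary reviewable: any new field that should cross it must be added deliberately rather than leaking by default.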

2. Insecure defaults at scale

LLMs regularly generate:

  • Outdated cryptographic primitives
  • Hardcoded credentials or tokens
  • Relaxed CORS configurations
  • Unauthenticated internal endpoints

Each instance may seem minor, but at AI speed, these defaults multiply rapidly.
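To make the first two concrete, here is an illustrative sketch contrasting generated-style defaults with safer stdlib equivalents; the token name `API_TOKEN` and the specific values are assumptions for the example:

```python
import hashlib
import os

# Insecure default an assistant may emit: fast, unsalted MD5 for passwords.
def hash_password_unsafe(pw: str) -> str:
    return hashlib.md5(pw.encode()).hexdigest()

# Safer stdlib alternative: salted PBKDF2 with a high iteration count.
def hash_password(pw: str, salt: bytes) -> str:
    return hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, 600_000).hex()

# Hardcoded credential vs. an environment lookup that fails loudly when unset.
API_TOKEN_UNSAFE = "sk-live-abc123"      # ends up in the repo history forever

def api_token() -> str:
    token = os.environ.get("API_TOKEN")  # variable name is illustrative
    if not token:
        raise RuntimeError("API_TOKEN is not set")
    return token
```

Neither fix is exotic; the point is that the secure form is rarely the one a model reaches for first, so it has to be enforced downstream.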

3. Expanded and accidental attack surface

AI often suggests exposing internal APIs, skipping authorization layers, or introducing permissive third-party libraries. The result is a broader, less intentional attack surface.

4. Threat-model erosion

Traditional security relies on developers understanding system boundaries. AI does not. It can introduce logic paths that invalidate an entire threat model without triggering obvious alarms.

5. Velocity overwhelms controls

One developer with an AI assistant can generate days or weeks of code in hours. Review processes and AST tooling designed for slower workflows struggle to keep up.

Why traditional AST falls short

Most AST tools were built to analyze predictable, human-authored codebases. AI-generated code exposes several gaps:

  • Unfamiliar patterns: LLMs blend frameworks, invent abstractions, or hallucinate libraries that don’t match existing rule sets.
  • Volume overload: Security teams cannot manually review the volume of AI-generated code flowing into repositories.
  • Missing context: Traditional AST flags insecure patterns but cannot detect violations of architectural intent or trust boundaries.
  • Dependency hallucinations: AI frequently introduces outdated, insecure, or nonexistent packages that evade standard checks.

The result: vulnerabilities aren’t just missed; they’re introduced faster than security teams can respond.

Evolving AST for the AI era

To secure machine-authored code, AST must move beyond pattern matching and become context-aware, continuous, and architecture-enforcing.

1. Treat AI-generated code as untrusted

AI output should be treated like third-party code: tagged, sandboxed, and subjected to stricter review. Commit metadata should identify AI-assisted changes and route them through enhanced security checks.
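One lightweight way to implement this routing is a CI gate keyed off a commit-message trailer. The `AI-Assisted:` trailer and the check names below are a hypothetical team convention, not a standard:

```python
# Sketch of a CI gate that routes AI-assisted commits to stricter review.
def is_ai_assisted(commit_message: str) -> bool:
    """Detect an 'AI-Assisted: true' trailer in a commit message."""
    for line in commit_message.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "ai-assisted" and value.strip().lower() == "true":
            return True
    return False

def required_checks(commit_message: str) -> list:
    """AI-assisted changes get the enhanced pipeline; others the baseline."""
    checks = ["sast", "secret-scan", "sca"]
    if is_ai_assisted(commit_message):
        checks += ["architecture-check", "dast", "manual-security-review"]
    return checks
```

A commit hook (for example via `git interpret-trailers`) can add the trailer automatically when an assistant is active, so the metadata doesn’t depend on developers remembering it.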

2. Shift security even further left

Security constraints must move upstream into the prompts themselves.

For example:

“Generate a secure file-upload endpoint in Flask. Validate MIME type, sanitize filenames, enforce size limits, and isolate storage.”

Secure-by-prompt becomes a first-class control.

3. Continuous static and dependency scanning

Every AI-generated artifact should undergo SAST, secret scanning, and software composition analysis (SCA) before merging. This catches insecure defaults and hallucinated dependencies early.
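The secret-scanning piece can be as simple as pattern matching over the diff. This is a minimal sketch with illustrative regexes, not a replacement for dedicated scanners such as gitleaks or trufflehog:

```python
import re

# Illustrative secret patterns; real scanners ship hundreds of these
# plus entropy-based detection.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(api|secret)[_-]?key\s*=\s*['\"][^'\"]{16,}['\"]"
    ),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_for_secrets(source: str) -> list:
    """Return the names of secret patterns found in a code snippet."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(source)]
```

Running a check like this on every AI-generated diff before merge is cheap, and it catches exactly the hardcoded-credential default described above.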

4. Enforce architecture and threat models

Security is not just about code correctness; it’s about design integrity. Teams need automated checks that validate data flows, API exposure, and service interactions against intended architecture. This is how guardrails are preserved as AI accelerates change.
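An exposure check is one tractable starting point: compare the routes a service actually registers against a declared manifest of what is meant to be public. The manifest format and route names here are hypothetical:

```python
# Sketch of an architecture-conformance check. In practice the manifest
# would live in version control and the routes would be discovered from
# the framework's router or the OpenAPI spec.
ALLOWED_PUBLIC_ROUTES = {"/login", "/health", "/api/v1/orders"}

def check_exposure(discovered_routes) -> list:
    """Return routes that are exposed but not declared public."""
    return sorted(set(discovered_routes) - ALLOWED_PUBLIC_ROUTES)
```

When an AI-generated change adds an endpoint, the build fails until someone explicitly updates the manifest, turning an architectural assumption into an enforced control.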

5. Runtime validation and monitoring

AI-generated code can behave unpredictably. Dynamic testing and runtime protection ensure that:

  • anomalous API calls are detected
  • unauthorized data access is blocked
  • insecure defaults surface under real conditions
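A minimal sketch of the first of these: a monitor that flags any endpoint/method pair not present in a baseline. The hardcoded baseline is an assumption; in practice it would be learned from traffic or derived from the API specification:

```python
from collections import Counter

# Runtime-monitoring sketch: report calls that deviate from a baseline.
BASELINE = {("GET", "/api/v1/orders"), ("POST", "/login")}

class RuntimeMonitor:
    def __init__(self, baseline):
        self.baseline = set(baseline)
        self.anomalies = Counter()

    def observe(self, method: str, path: str) -> bool:
        """Record a call; return True when it deviates from the baseline."""
        call = (method.upper(), path)
        if call not in self.baseline:
            self.anomalies[call] += 1
            return True
        return False
```

Anomaly counts like these feed naturally into the feedback loop described next: repeated deviations point at either a gap in the baseline or code that shouldn’t have shipped.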

6. Close the feedback loop

Recurring issues should feed back into prompt templates, code generators, architectural standards, and security policies. The system must learn as fast as the AI does.

A modern AST pipeline for AI code

Below is a conceptual view of how a secure AI-integrated AST pipeline operates:

[Diagram: AI-generated code → SAST, secret scanning, and SCA → architecture and threat-model checks → dynamic and runtime testing → production, with findings feeding back into prompts and policies]

In this model, every AI-generated code segment passes through a multi-layered validation pipeline (from static analysis and architectural checks to dynamic runtime testing) before reaching production. The feedback loop ensures continuous improvement in both AI prompting and security enforcement.

Closing thoughts

AI is transforming how software is built. It increases speed, but it also bypasses assumptions that traditional security models depend on. To keep up, AST must evolve. It must be continuous, contextual, and architecture-aware. The goal is no longer just finding vulnerabilities before release; it’s governing machine-generated code so that speed does not come at the cost of safety.

If the future of development is human + AI, the future of security is AST that understands both.
