Every digital ad bought today is paid for under the assumption that something on the other end is human. That assumption used to be defensible. It isn't anymore.
Ad fraud is projected to cost brands $172 billion globally by 2028, up from roughly $84 billion in 2023 (Statista, citing Juniper Research). About 22% of global digital ad spend is currently lost to fraud. The Imperva 2024 Bad Bot Report found that 49.6% of all web traffic is non-human — 32% malicious bots, the rest benign automation. The ANA's programmatic transparency study found that $20 billion of the $88 billion programmatic ecosystem is waste, much of it Made-For-Ads sites that detection vendors have not been able to shut down.
Those numbers don't reflect a detection failure. They reflect a structural one. The detection layer has been losing for years. LLMs are about to make the loss permanent.
The major fraud-prevention vendors — DoubleVerify, Integral Ad Science, Human Security, Pixalate — operate in two stages. Pre-bid filtering blocks known bad IPs, suspicious domains, and suspicious traffic patterns before an ad serves. Post-bid analysis measures what got through and refunds advertisers for invalid traffic.
This is real work. It's not nothing. IAS has published data showing campaigns running with their fraud protection at 0.7% Invalid Traffic, versus 10.9% for unprotected campaigns. That's a 15x improvement. Detection vendors aren't selling snake oil — they're selling a real service that genuinely reduces fraud at the margin.
The issue isn't that detection doesn't work. The issue is what detection has been built on.
Strip away the marketing language and detection comes down to recognizing patterns of bad behavior:
IP reputation. User agent strings. Click cadence and inter-click timing. Dwell time and scroll patterns. Device fingerprints (font lists, canvas hashes, GPU info, plugin sets). Behavioral baselines built from session data. Geographic plausibility. Pixel-level mouse movement and entropy.
Each of these layers was designed to catch a category of bot — early scripted bots, then headless browsers, then more sophisticated automation frameworks. Detection vendors have done remarkable engineering work to keep these layers tuned as fraudsters got more sophisticated.
But every one of those signals can now be generated by an LLM-driven agent at trivial cost.
An LLM agent operating with a real residential IP, a real consumer browser, and a behavioral model trained on actual human session data does not look like a bot at any signal layer detection currently uses. The IP is real. The user agent is real. The click cadence matches a behavioral baseline because the agent is mimicking one. Dwell times are realistic because the model can read content and pause appropriately. Scroll patterns are smooth because they're synthesized from real scroll data.
The economics matter here. Generating sophisticated bot traffic in 2018 was an arms race because it required real engineering — botnets, proxy rotation, fingerprint spoofing toolchains. Each round of detection improvement forced another round of fraudster investment. That arms race was expensive enough on the offense side to keep some equilibrium.
LLMs collapsed the cost of the offense side by orders of magnitude. An agent that runs a synthetic browsing session, reads an ad, generates a plausible click pattern, and even passes basic post-click engagement signals can now be deployed for cents per session. The defense costs the same as it always did.
DoubleVerify's 2024 Global Insights Report found that 54% of advertisers say generative AI has degraded media quality. They report a 19% year-over-year increase in Made-For-Ads impression volume in 2023. The trajectory is not subtle.
The arms race was always asymmetric — offense moves first, defense reacts. LLMs make the asymmetry permanent.
Detection vendors will keep selling defense, and the defense will keep partially working. The 0.7% vs 10.9% IAS data is real. Detection is meaningful at the margin and will remain so.
But the structural ceiling on detection is real and getting lower. The honest implication is that advertisers who continue to pay-per-impression are increasingly subsidizing AI-vs-AI loops where neither side benefits from the underlying message reaching anyone who would have cared about it. The brand pays for impressions. A bot consumes them. A different bot reports them. A detection vendor catches some fraction. The remainder is paid for and disappears.
That's not a problem detection can solve. It's a problem with the unit of payment.
The interesting question is what happens when the unit of payment changes from "impression" or "click" to "verified human attention with proof of comprehension."
A few existing models gesture at this. Paid attention systems like Brave Rewards pay consumers to view ads, but without comprehension verification — the unit of value is still "exposure," which AI users can fake. Verified panel businesses like Wynter and NewtonX run paid B2B research panels that verify identity and pay for structured feedback, but they sell research, not advertising — the buyer is doing pre-campaign message testing, not running the actual ad campaign through the panel.
The space that doesn't yet exist at scale is verified-engagement advertising — where a brand pays only when a real, identity-verified human reads their content and proves they understood it. The unit of value becomes proof of comprehension, not exposure or claimed attention.
I'm building one version of this at Nexertise. The platform pays senior professionals to read short B2B content (3-5 minutes), pass a comprehension check, and submit structured feedback. Identity is verified at signup via Stripe KYC. Brands only pay when the comprehension check passes. We're pre-launch and recruiting senior engineers as founding members for the May 21st pilot — if the framing of this essay resonates and you're an engineer at a top tech company, the signup page is at nexertise.com/founding.
That's not the only possible structural answer. It's the one I find most defensible against the failure mode this essay describes.
The structural approach has its own failure modes. The most obvious failure mode: a user runs an LLM on the content and pastes the answer to the comprehension check. The comprehension check alone can't stop this — and the platform isn't built to assume it can. The defense is structural rather than puzzle-based.
Content is rendered to canvas, not DOM, so it can't be trivially fed to an LLM without OCR overhead. Invisible prompt-injection canaries flag any session that ingests the page through an LLM. Paste detection and keystroke dynamics catch out-of-distribution input patterns. Focus telemetry registers tab-switches to ChatGPT or similar tools. Payments sit in escrow until reviewed, with risk scores sorted highest-first. Identity is bound at signup via Stripe KYC; three strikes against a verified identity ends the account permanently, and the same person can't re-verify.
Each layer alone is defeatable. Defeating all of them at once, repeatedly, while staying under the strike threshold, is the specific economic problem the architecture is designed to make unprofitable. Whether it succeeds at scale is an empirical question the pilot will start to answer.
Supply also has to scale faster than demand, because audiences deplete per-advertiser. And the comprehension-check itself has to be designed carefully — too easy and it's gameable; too hard and you scare off legitimate readers.
I'm not certain I have the right answer. I am certain the question is unavoidable.
Detection-based defense in advertising was built for a world where the cost of generating realistic human-looking engagement was high enough to maintain equilibrium. That world ended sometime in 2023 and we haven't fully absorbed the implications yet.
— Yisrael Gottlieb. 20-year-old solo founder. Built Nexertise with Claude Code over 5 months. Just applied to YC. yisrael@nexertise.com.