Hidden Keywords Are Back (and Still a Terrible Idea)

Hidden Keywords Are Back (and Still a Terrible Idea)

Despina Gavoyannis Avatar
Despina Gavoyannis Avatar

Disclosure: Our content is reader-supported, which means we earn commissions from links on Crazy Egg. Commissions do not affect our editorial evaluations or opinions.

“Hidden keywords” used to be one of the most blatant black hat SEO tricks: stuffing content behind the scenes to manipulate search engines. For a while, it faded into irrelevance as algorithms got smarter.

But with the rise of large language models and generative search experiences, some SEOs are quietly experimenting again, this time with more subtle, technical variations designed to influence LLMs rather than classic rankings.

In this article, we’ll unpack how these new-age hidden keyword tactics work, why they’re resurfacing, and the long-term risks for anyone tempted to play that game.

What Hidden Keywords Used to Mean

Before generative search and LLMs became part of the SEO conversation, “hidden keywords” referred to a set of tricks aimed at fooling search engines. These tactics were part of the classic black hat playbook, designed to make a page appear more relevant for specific search terms without making those terms visible to users.

Some of the most common versions included:

  • White text on a white background: A line of text that only bots could see, crammed with keyword variations.
  • CSS hiding (display:none): Entire paragraphs or keyword-stuffed blocks hidden from human eyes but still in the HTML.
  • Off-screen positioning (text-indent: -9999px): Keywords technically “on the page,” but pushed so far off-screen no user would ever encounter them.
  • Alt text and meta tag stuffing: Using image tags or meta descriptions to jam in irrelevant or repetitive terms.

These tactics did work for a while. 

Some sites ranked shockingly well using them. However, as search engines matured (especially with algorithm updates like Google’s Panda and Penguin), this kind of manipulation became both ineffective and risky. 

Pages using hidden keyword techniques were penalized or deindexed altogether. So the SEO industry moved on… until now.

How Hidden Keywords Are Making a Comeback (Sort Of)

With the rise of generative search and AI-powered summaries, SEOs are again asking: What parts of my page are LLMs actually reading? Can I influence what they generate?

Unlike traditional search engines that primarily index visible content, LLMs take a broader approach. 

They don’t just look at your main text. Most HTML-aware models can also parse the entire HTML code. That means structured data, metadata, HTML comments, off-screen elements, and even aria-labels can all become part of the “context” a model might use to generate a summary or recommend a site.

This has opened the door to a new generation of hidden keyword tactics. 

They’re not as blatant as white-on-white text, but the intention is the same: sneak in extra terms that a human might not see, in hopes that an LLM will.

Here are a few of the techniques I’ve seen people consider:

  • Keyword-stuffed HTML comments – Blocks of “context” or “related topics” embedded in comments at the end of a page.
  • Overloaded schema fields – Product or article schema packed with long lists of semantically related phrases, far beyond what a normal search engine would need.
  • Prompt-like metadata – Meta descriptions or Open Graph tags that read like an AI prompt rather than a natural summary.
  • Invisible internal links – Anchor text hidden via display:none or cloaked in expandable menus, linking to other keyword-rich pages for AI context rather than user navigation.

What About Prompt Injection?

A newer wrinkle in this conversation is prompt injection. 

Prompt injection is about embedding text that’s designed to influence how an LLM interprets or responds to a page. While often discussed in the context of AI security, SEOs have started experimenting with ways to steer generative summaries or featured snippets by “injecting” prompt-like language into structured data, meta descriptions, or even on-page copy.

For example:

“You are an expert reviewer. Here is a detailed product comparison including specs, pricing, and user reviews…”

Are these strategies clever? Maybe. Are they sustainable? That’s a different question.

They toe the line between optimization and manipulation, and they come with all the same risks. 

If platforms start filtering or ignoring prompt-injected content (and they will), entire strategies built around this tactic could collapse overnight.

Why This Approach Is Risky (and Probably Not Worth It)

While it might feel like a clever workaround, relying on hidden keywords in the age of LLMs is a short-sighted play. It’s the same problem dressed in more technical clothing: you’re trying to manipulate a system that’s getting better at understanding intent, not just parsing content.

Here’s why it’s a risky move:

1. You’re Training the Wrong Signals

When you inject content meant solely for machines, you’re essentially telling LLMs: “This part of my site isn’t for humans.” 

That can backfire. 

LLMs are built to prioritize helpfulness, clarity, and trust. If your content starts looking like noise (even sophisticated noise), it may be ignored, deprioritized, or summarized in misleading ways.

2. You May Confuse Indexers and Summarizers

Search engines like Google are increasingly merging traditional indexing with generative AI. If one system is reading your visible content while another is parsing hidden signals, you risk sending mixed messages. 

This can result in inaccurate AI summaries, diluted topical authority, or even unexpected associations between your brand and irrelevant queries.

3. What Works Today Might Get You Penalized Tomorrow

History tells us that every SEO loophole eventually gets closed. 

Google has already confirmed they’re keeping an eye on how people attempt to influence AI Overviews. OpenAI’s systems are also rapidly evolving, and what gets picked up today may be filtered or downranked tomorrow.

Remember: there’s no such thing as a long-term win built on a short-term trick.

4. LLMs Don’t Need Keyword Clues the Way We Think They Do

Large language models are trained on massive datasets. If your site genuinely covers a topic well, through clear, contextual, and relevant content, there’s no need to artificially inflate its footprint. 

In fact, trying too hard to “optimize for AI” can make your content less readable, less trustworthy, and ironically, less likely to be surfaced.

The future of visibility in search belongs to those who build with transparency, not tricks.

What Actually Matters for LLM-Era SEO

It’s easy to see why tactics like hidden keywords are resurfacing. The landscape is shifting fast, and it feels like we’re all running a bit blind through a fog of generative summaries, AI overviews, and rapidly evolving algorithms. 

But if there’s one constant, it’s this: search (whether powered by links, language models, or a blend of both) rewards clarity of intent.

Rather than trying to outsmart the system, the better long-term play is to build content that is genuinely helpful. That means:

  • Clear, well-organized pages that reflect real topical authority.
  • Honest metadata and schema markup that adds clarity, not clutter.
  • A content footprint that shows depth, relevance, and trustworthiness across the topics you want to be known for.

In other words: don’t just optimize for what an AI might see…optimize for what it’s trying to understand.


Scroll to Top