AI Lyric Red Flags: 87 Banned Phrases That Reveal AI Output
Every large-language model trained on internet text inherits the same bag of lyric clichés. SongForgeAI scans every forge output against a curated list of 87 banned phrases — the words and patterns a human writer reaching for the right line would not have chosen, but that a generator taking the path of least resistance defaults to. Here is the list, the reasoning behind it, and what to do when one of yours flags.
Why a banned-phrase list at all
Most quality measures for AI-generated text are about whether the output is GOOD. The banned-phrase scanner is a different test: whether the output is visibly AI. A song can pass every quality metric and still read as machine-written if it leans on the same six tells everyone hears in every Suno render.
The 87 phrases on this list aren’t banned because they’re bad in the abstract — Bob Dylan can write a song with "neon" in it. They’re banned because they signal the writer didn’t reach. The model picked the lowest-perplexity continuation, and the listener feels the lack of effort even before they can articulate why.
The categories
The list breaks into roughly five clusters:
- Atmospheric clichés. "Neon," "echoes," "shadows," "whispers," "shimmer," "glow." Words that paint a vibe without naming a thing. The model loves them because they’re universally evocative; the listener tunes them out for the same reason.
- Romantic tropes. "Heart on fire," "stars align," "forever and always," "love is a battlefield." Phrases that have been the title of 47 songs. The model uses them as scaffolding; a working songwriter knows they’re scaffolding and writes around them.
- Action verbs that don’t act. "Shatter," "ignite," "soar," "embrace," "transcend." Big-emotion verbs used as decoration rather than narrative. The model reaches for them when it needs a word; the listener wants a thing being done.
- Object metaphors. "Tapestry," "kaleidoscope," "labyrinth," "mosaic." Words that imply the song is more sophisticated than its content. They’re tells because no one reaching for the right word lands on "tapestry."
- The "I find myself" pattern. "I find myself wondering / drifting / falling / running." A specific syntactic tic that almost no human songwriter uses but every introspection-prompted LLM defaults to. (See M8 Voice & POV Integrity for why this collapses narrator identity.)
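Literal string matching won’t catch a syntactic tic like the last one; it takes a regex. A minimal sketch in TypeScript — the pattern and function name below are illustrative, not the ones shipped in src/lib/banned-terms.ts:

```typescript
// Illustrative regex for the "I find myself ___ing" tic.
// Not the production pattern from banned-terms.ts.
const FIND_MYSELF = /\bI find myself\s+\w+ing\b/i;

function hasFindMyselfTic(line: string): boolean {
  return FIND_MYSELF.test(line);
}
```

"I find myself drifting through the dark" flags; "I found myself a seat" does not, because the tense breaks the pattern.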
What happens when a flag triggers
SongForgeAI runs the scanner post-generation and surfaces matches inline on the song detail page. When you see a flagged term, the question isn’t "is this word bad?" — it’s "did I REACH, or did I default?"
If the line genuinely needs the word, keep it and compensate for the specificity loss elsewhere in the line. If you defaulted, the fix is usually a more concrete image. "Neon-lit memories" → "the diner’s tube-light at 2:14." "Stars align" → "the Pleiades cleared the cypress." Specific replaces atmospheric every time, and specific scores higher on M5 Specificity, M6 Image Discipline, and M11 Memorability.
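The scan itself is straightforward: lowercase the lyric, walk each line, and record every phrase hit with its line index so the UI can surface it inline. A minimal sketch, assuming a plain substring match and an abbreviated phrase list — the function and type names are hypothetical, not the SongForgeAI API:

```typescript
// Abbreviated stand-in for the full 87-phrase list.
const SAMPLE_PHRASES = ["neon", "stars align", "tapestry"];

interface Flag {
  line: number; // 0-based line index in the lyric
  term: string; // the banned phrase that matched
}

// Scan a lyric and return every banned-phrase hit with its line index.
function scanLyric(lyric: string, phrases: string[] = SAMPLE_PHRASES): Flag[] {
  const flags: Flag[] = [];
  lyric.split("\n").forEach((text, line) => {
    const lower = text.toLowerCase();
    for (const term of phrases) {
      if (lower.includes(term)) {
        flags.push({ line, term });
      }
    }
  });
  return flags;
}
```

Returning line indices rather than a bare yes/no is what makes inline surfacing possible: each flag can be rendered next to the exact line it came from.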
The full list (sample)
Without dumping all 87, here’s a representative sample. The complete enforcement list lives in src/lib/banned-terms.ts on GitHub — public source, licensed CC-BY 4.0 like the rest of the rubric:
- neon, echoes, shimmer, shatter, ignite, soar
- heart on fire, stars align, forever and always
- tapestry, kaleidoscope, labyrinth, mosaic
- "I find myself ___ing" (the syntactic pattern)
- whispered secrets, dancing in the rain, paint the sky
- etched, woven, carved (when used as decoration verbs)
- silver moonlight, golden sunsets, crystal clear
If your draft has more than three of these, the LLM-default pattern is dominant and the lyric reads as machine-written before any individual line lands. Score-and-strip via /forge is the fastest fix.
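The "more than three" rule is easy to mechanize: count how many distinct banned terms appear in the draft and treat it as LLM-default-dominant past the threshold. A sketch under the same assumptions as above — abbreviated list, hypothetical names:

```typescript
// Abbreviated stand-in for the full 87-phrase list.
const SAMPLE_BANNED = ["neon", "echoes", "shimmer", "tapestry", "stars align"];

// True when more than three distinct banned terms appear in the draft,
// i.e. when the LLM-default pattern is dominant per the rubric's rule.
function isLlmDefaultDominant(
  lyric: string,
  banned: string[] = SAMPLE_BANNED
): boolean {
  const text = lyric.toLowerCase();
  const hits = banned.filter((term) => text.includes(term));
  return hits.length > 3;
}
```

Note the count is of distinct terms, not total occurrences — a draft that repeats "neon" five times has one tell, not five.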
Why publishing this list matters
The list is open source on purpose. Any third-party AI lyric tool that wants to ship "fewer cliché tells" can import the same list. It’s on npm as part of the Lyric Scoring Standard package. Adoption is the point — the more tools that screen for these tells, the faster average AI lyric quality moves.
The list is also versioned. New tells get added when they emerge in the wild (every model release coins one or two); occasionally a phrase comes off the list when a generation of human writers reclaims it. The current version + diff is at /scoring/standard/changelog.