The Songwriter Index 2026
The first annual report on AI-assisted songwriting craft. Drawn from songs scored on SongForgeAI against the open 12-metric Lyric Scoring Standard. Open data, open methodology, citeable.
What this is
The Songwriter Index is an annual report. Each edition draws on the full year's songs scored on SongForgeAI — currently 1,500+ per quarter — and analyzes them across the 12 metrics of the open Lyric Scoring Standard, plus the 5 deterministic craft analyzers (Stolpe Index, POV stability, Rhyme Complexity, Singability stress, Emergent Cliché).
The questions the index answers, by intent:
- Which genres show the strongest craft signal year-over-year?
- What craft patterns recur across the AI-assisted-lyric corpus, and what patterns finally break?
- Which Stolpe-Index sections (verse/chorus/bridge balance) hit target ranges, and which drift?
- What proportion of high-scoring songs are AI-generated vs human-edited vs human-original (per the Human Contribution Log)?
- Which ghost-collaborator voices produce the highest-scoring output across genres?
The data is uniquely ours — no other lyric tool publishes a scored corpus at this scale. The methodology is open, the metrics are open, the report is free to cite under CC BY 4.0.
Methodology
Data source
Songs forged on SongForgeAI in the calendar year, scored against the Lyric Scoring Standard v1.0. Excludes songs flagged for moderation, songs that failed scoring, and songs with status other than ‘complete’.
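The eligibility rule above can be sketched as a simple predicate. This is an illustrative sketch only; the field names (`status`, `moderation_flag`, `score`) are assumptions, not the actual SongForgeAI schema.

```python
def eligible(song: dict) -> bool:
    """Mirror the corpus exclusions: only completed, unflagged,
    successfully scored songs enter the Index."""
    return (
        song.get("status") == "complete"          # status other than 'complete' is excluded
        and not song.get("moderation_flag", False)  # moderation-flagged songs are excluded
        and song.get("score") is not None           # songs that failed scoring are excluded
    )
```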
Anonymization
Per-user data is aggregated and never published with user attribution. Songs explicitly opted into public view (is_public = true) may be cited individually with their public share URL; all other songs contribute only to aggregates.
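The citation rule reduces to a single branch. A minimal sketch, assuming hypothetical record fields `is_public` and `share_url`:

```python
def citation_for(song: dict):
    """Opted-in songs may be cited by their public share URL;
    everything else contributes only to anonymous aggregates."""
    if song.get("is_public"):
        return song["share_url"]
    return None  # no individual attribution
```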
Metric set
12 rubric metrics (Hook Strength, Sensory Specificity, Emotional Truth, etc. — full list at /scoring/standard) plus 5 deterministic craft analyzers (Stolpe Index, POV stability, Rhyme Complexity, Singability stress, Emergent Cliché). Each finding cites which metrics support it.
Sample-size guards
Genre breakdowns require N ≥ 50 per genre to publish. Ghost-collaborator findings require N ≥ 20 per ghost. Per-section Stolpe findings require N ≥ 100 per section type. Below thresholds, the finding is suppressed and reported as ‘insufficient data this period’.
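The guards above amount to a per-finding-type threshold check. A minimal sketch using the published thresholds; the function and key names are illustrative:

```python
# Published minimum-N thresholds per finding type.
THRESHOLDS = {"genre": 50, "ghost": 20, "stolpe_section": 100}

def finding_status(finding_type: str, n: int) -> str:
    """Return 'publish' when N clears the guard for this finding type,
    otherwise the suppression string used in the report."""
    if n >= THRESHOLDS[finding_type]:
        return "publish"
    return "insufficient data this period"
```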
Versioning
The rubric is versioned (currently v1.0). When the rubric bumps, year-over-year comparisons cite both versions and document the diff. The full version table lives at /scoring/standard.
Reproducibility
Each finding cites the model id, rubric version, and date range used. The aggregation queries themselves are not published (database access required), but the scoring-standard.json shape + the per-song eval payload shape are both public — anyone implementing the standard against their own corpus can reproduce the methodology.
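The provenance each finding carries might look like the following. Field names here are illustrative assumptions, not the published scoring-standard.json or eval payload schema:

```python
# Hypothetical shape of a single finding's provenance block.
finding = {
    "claim": "example finding text",
    "metrics": ["Hook Strength", "Stolpe Index"],  # which metrics support it
    "model_id": "example-model-v1",                # assumed identifier
    "rubric_version": "v1.0",
    "date_range": ["2026-01-01", "2026-12-31"],
}
```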
Cadence
The Index publishes once per year as a full annual report, with a Q4 preview release. Major rubric changes between editions trigger an interim edition documenting the shift.
2026 findings
Aggregating. The Q4 2026 preview release will populate this section. v0 ships the framework and methodology so the citation is establishable now; the data fills in across the year. The questions below will be answered:
- Which genres show the strongest average craft scores?
- What proportion of choruses hit the target external/internal balance range?
- Which structural patterns produce the strongest Hook Strength scores?
- What is the median human-authorship percentage of the top 10% of scored songs?
- Which of the 8 ghost voices produces the highest-scoring output, and in which genres?
- Which emergent clichés appeared most often year-over-year, and which finally faded?
Citation
songforgeai.com/songwriter-index. Published 2026-04-27. CC BY 4.0.
This work is licensed under CC BY 4.0. Reuse, quote, embed, build on freely with attribution.