Help build a 5-year reference corpus.
The corpus currently ships with 12 entries. The target for v1.0.0: 1,000+ entries across multiple genres and writers, every entry hand-scored, every entry traceable to a public artifact. Community contributions are how it gets there.
Contributions are gated by quality, not volume. We’d rather ship 30 excellent entries this quarter than 300 mediocre ones. The five gates below are non-negotiable.
Working songwriter? We’re also onboarding 10 paid raters at $500/quarter to calibrate the rubric against expert judgment. See the Paid Rater Program.
The five quality gates
1. The lyric is excerpted under fair use OR fully owned.
Quoted public-domain originals are OK. Quoted contemporary work is OK with attribution, under fair use for criticism and commentary. Original AI-generated lyrics are OK. Lyrics generated by a competitor's AI tool are NOT OK without explicit permission.
2. The score is independently verifiable.
Run the lyric through /api/v1/score against the current published rubric (Lyric Scoring Standard v1.1.0). Include the response body (with seal) in the PR description. If your hand score differs from the API by more than 8 points, document the disagreement explicitly.
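The 8-point disagreement rule can be sketched as a simple check (the function name and threshold parameter here are illustrative, not part of the repo tooling):

```python
def needs_disagreement_note(hand_score: int, api_score: int, threshold: int = 8) -> bool:
    """Return True when the hand score and the API composite differ by more
    than the allowed threshold, meaning the PR must document the disagreement."""
    return abs(hand_score - api_score) > threshold

# A hand score of 84 against an API composite of 78 is within 8 points:
print(needs_disagreement_note(84, 78))  # False: no note required
print(needs_disagreement_note(84, 75))  # True: document the disagreement
```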
3. The rationale is plain-spoken.
Two to four sentences explaining what drove the score band. Not marketing copy. Not a summary of the lyric. Identify the specific elements (Specificity, Voice, Rhyme intelligence, etc.) that lifted or capped the score.
4. The notable list is concrete.
One to three bullets identifying signal-bearing moments. "Strong specificity" is not a notable. "The line `coffee can next to the matches` carries the entire memory" is.
5. The entry advances the calibration.
Entries near scores already represented (we have a 78, a 76, an 82) must demonstrably differ in tier balance, genre, or failure mode. Entries in underrepresented score bands (currently 25-35, 40-55, and 95+) are accepted on weaker novelty grounds.
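One way to read the band rule, as a sketch (the band ranges come from the note above; the helper name and the assumption that composites top out at 100 are ours):

```python
# Underrepresented bands from the calibration note (inclusive ranges).
# 95+ is open-ended; we assume composites max out at 100.
UNDERREPRESENTED = [(25, 35), (40, 55), (95, 100)]

def novelty_bar(composite: int) -> str:
    """Return which novelty bar an entry must clear for its composite score.
    Entries inside an underrepresented band face a weaker bar; everything
    else must demonstrably differ from existing entries."""
    for lo, hi in UNDERREPRESENTED:
        if lo <= composite <= hi:
            return "weaker"
    return "strict"

print(novelty_bar(48))  # weaker: the 40-55 band is underrepresented
print(novelty_bar(79))  # strict: clustered near the existing 76/78/82 entries
```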
The PR workflow
1. Open a PR against public/scoring-corpus-v1.json.
Add your entry as a new array item. Keep the same field shape as the existing entries — id, title, source, attribution, genre, lyrics, composite, tier, band, rationale, notable.
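A minimal sketch of the field shape, written as a Python dict for checkability. Only the field names come from the schema above; every value here is hypothetical:

```python
REQUIRED_FIELDS = {"id", "title", "source", "attribution", "genre", "lyrics",
                   "composite", "tier", "band", "rationale", "notable"}

# A hypothetical entry. Values are illustrative placeholders only.
entry = {
    "id": "corpus-013",
    "title": "Example Title",
    "source": "original",
    "attribution": "Contributed by @yourhandle",
    "genre": "folk",
    "lyrics": "your excerpted or original lyric text here",
    "composite": 48,
    "band": "40-55",
    "tier": "example-tier",  # hypothetical tier label
    "rationale": "Two to four sentences naming the signals behind the band.",
    "notable": ["One to three bullets identifying signal-bearing moments."],
}

# The new array item must carry exactly the fields the existing entries carry.
assert REQUIRED_FIELDS <= entry.keys()
```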
2. Increment the corpus version.
Append a corpus-NNN id (zero-padded, sequential). The corpus version field bumps to 0.1.<N+1> for additions; the rubricVersion field stays pinned to whatever rubric your entry was scored against.
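The id and version bumps can be sketched like this (these helpers are illustrative, not part of the repo tooling):

```python
def next_id(last_id: str) -> str:
    """corpus-012 -> corpus-013, preserving the zero padding."""
    prefix, num = last_id.rsplit("-", 1)
    return f"{prefix}-{int(num) + 1:0{len(num)}d}"

def bump_corpus_version(version: str) -> str:
    """0.1.N -> 0.1.N+1: additions bump only the last component.
    The rubricVersion field is untouched -- it stays pinned to the
    rubric the entry was scored against."""
    major, minor, patch = version.split(".")
    return f"{major}.{minor}.{int(patch) + 1}"

print(next_id("corpus-012"))          # corpus-013
print(bump_corpus_version("0.1.12"))  # 0.1.13
```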
3. PR description includes the API response.
Run /api/v1/score on the lyric, paste the JSON response (with seal) into the PR description. The reviewer can re-run it to verify the seal matches.
4. Reviewer triage.
A maintainer evaluates the entry against the five quality gates above. Accept / accept-with-edits / decline. A decline always includes a reason tied to a specific gate.
5. Merge → ship.
Accepted entries ship in the next deploy. The /scoring/corpus page rebuilds at deploy time and the entry becomes citable at the same URL.
What we will reject without comment
- Lyrics from a competitor's AI tool without explicit permission.
- Entries scored against a rubric version other than the live one.
- Rationale that just summarizes the lyric without naming signals.
- Notable lists with marketing-grade vagueness ("strong vibes").
- Bulk submissions (more than 5 entries in a single PR).
Ready to contribute?
Open a PR against public/scoring-corpus-v1.json on GitHub. Tag it with corpus-contribution so a maintainer triages it within 7 days.