Case 002 — A 30-song corpus, one writer, two-band Voice consistency
The voice-fingerprint API (B1221) can classify whether a single writer's 30-song catalog reads as an "identifiable" or a "developing" voice, and the result should agree with the averages of the per-song Voice metric.
Starting point
30 songs, single writer, roughly 2 years of catalog. Per-song Voice scores from /api/v1/score range from 58 to 89, with no clustering visible to the eye.
Mean Voice across the catalog was computed by hand from the 30 individual scores. The standard deviation appeared "wide" on visual inspection; no formal calibration was done.
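The hand-computed starting-point stats can be reproduced with the standard library. The score list below is a placeholder for illustration, not the writer's actual 30-song data:

```python
import statistics

# Placeholder per-song Voice scores (illustrative only, not the
# writer's actual catalog).
scores = [58, 62, 65, 68, 70, 71, 72, 72, 73, 73,
          74, 74, 75, 75, 75, 76, 76, 77, 77, 78,
          78, 79, 80, 81, 82, 83, 85, 86, 88, 89]

mean = statistics.mean(scores)
median = statistics.median(scores)   # middle of the sorted catalog
std_dev = statistics.pstdev(scores)  # spread on the 100-point scale

print(f"mean={mean:.1f} median={median} stdDev={std_dev:.1f}")
```

pstdev (population standard deviation) is used here because the 30 songs are the whole catalog, not a sample from a larger one.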
Result
Returned:

{ mean: 73, median: 75, stdDev: 9.4, consistency: 81, driftPerSong: 0.8, band: "identifiable" }

Consistency 81 lands in the "identifiable" band. The visual impression of "wide" was wrong: stdDev 9.4 on a 100-point scale is tight. driftPerSong 0.8 means the writer's Voice has moved 0.8 points per song over the 2-year catalog (essentially flat). The fingerprint API surfaced a quantitative answer the writer couldn't derive by reading 30 songs.
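driftPerSong reads like a per-song trend. One plausible way to compute such a trend is an ordinary least-squares slope of score against song index; this is an assumption for illustration, not the documented B1221 formula:

```python
from statistics import mean

def drift_per_song(scores):
    # Least-squares slope of Voice score vs. song index (chronological
    # order). A plausible reading of a "driftPerSong" metric, not the
    # API's actual formula.
    n = len(scores)
    xs = range(n)
    x_bar = mean(xs)
    y_bar = mean(scores)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, scores))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

# A perfectly flat catalog has zero drift:
print(drift_per_song([75] * 30))  # prints 0.0
```

A positive slope means the voice is trending upward song over song; near zero means it is holding steady.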
Pipeline
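A minimal offline sketch of the pipeline as described above: score each song, then aggregate. The per-song HTTP call to /api/v1/score is replaced here with a deterministic stub, and consistency, band, and driftPerSong are left to the server-side B1221 call, which this sketch does not model:

```python
import statistics

def score_song_stub(lyric: str) -> int:
    # Offline stand-in for the per-song POST to /api/v1/score;
    # returns a deterministic placeholder score in the 58-89 range.
    return 58 + (len(lyric) * 7) % 32

def run_pipeline(lyrics):
    # Step 1: score every song individually (the real pipeline makes
    # one API call per lyric here).
    scores = [score_song_stub(lyric) for lyric in lyrics]
    # Step 2: aggregate into catalog-level stats. consistency, band,
    # and driftPerSong come from the B1221 fingerprint call and are
    # deliberately not modeled in this sketch.
    return {
        "n": len(scores),
        "mean": round(statistics.mean(scores), 1),
        "median": statistics.median(scores),
        "stdDev": round(statistics.pstdev(scores), 1),
    }

catalog = [f"placeholder lyric for song {i}" for i in range(30)]
print(run_pipeline(catalog))
```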
Lessons
- A visual "feels inconsistent" judgment is unreliable above n=10. The fingerprint API quantifies what the human ear can't.
- Consistency 81 (identifiable) is the floor for what most writers think of as "having a voice." The Hank Williams S-band reference (corpus-003) sits at consistency 95+ on a comparable corpus.
- driftPerSong is the metric to watch for writers who suspect they’re drifting — a positive trend over 30+ songs is the signal that the voice is evolving rather than dissipating.
- Per-song Voice score range was 58-89; that’s noise around the mean, not signal. The mean + stdDev tell the real story.
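The range-vs-stdDev point can be made concrete: two hypothetical catalogs can share the same 58-89 range while having very different spread, which is why the mean and stdDev, not the extremes, tell the real story:

```python
import statistics

# Two hypothetical 30-song catalogs with an identical 58-89 range.
tight = [58] + [73] * 28 + [89]   # two outliers around a stable center
scattered = [58, 89] * 15         # scores bouncing between the extremes

# Same range, very different stdDev:
print(round(statistics.pstdev(tight), 1))
print(round(statistics.pstdev(scattered), 1))
```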
Want a case run on your own lyric?
Email support@songforgeai.com with your lyric + the hypothesis you want tested. Selected cases ship as their own public entry under the same strict format.
Or run it yourself in the forge