The history of JavaScript benchmarks is a story of constant evolution. As the web expanded from simple documents to dynamic client-side applications, new benchmarks were created to measure the workloads that mattered for each new use case. This constant change has given individual benchmarks finite lifespans: once web browser and virtual machine (VM) implementations begin to over-optimize for specific test cases, a benchmark ceases to be an effective proxy for its original use case. One of the first JavaScript benchmarks, SunSpider, provided early incentives for shipping fast optimizing compilers. However, as VM engineers uncovered the limitations of its microbenchmarks and found ways to optimize for its specific tests rather than for representative workloads, the browser community retired SunSpider as a recommended benchmark.