
10 posts tagged with "JavaScript"


Land ahoy: leaving the Sea of Nodes

· 31 min read
Darius Mercadier

V8’s end-tier optimizing compiler, Turbofan, is famously one of the few large-scale production compilers to use Sea of Nodes (SoN). However, almost three years ago we started to move away from Sea of Nodes and back to a more traditional Control-Flow Graph (CFG) Intermediate Representation (IR), which we named Turboshaft. By now, the whole JavaScript backend of Turbofan uses Turboshaft instead, and WebAssembly uses Turboshaft throughout its entire pipeline. Two parts of Turbofan still use some Sea of Nodes: the builtin pipeline, which we are slowly replacing with Turboshaft, and the frontend of the JavaScript pipeline, which we are replacing with Maglev, another CFG-based IR. This blog post explains the reasons that led us to move away from Sea of Nodes.

Turbocharging V8 with mutable heap numbers

· 6 min read
[Victor Gomes](https://twitter.com/VictorBFG), the bit shifter

At V8, we're constantly striving to improve JavaScript performance. As part of this effort, we recently revisited the JetStream2 benchmark suite to eliminate performance cliffs. This post details a specific optimization we made that yielded a significant 2.5x improvement in the async-fs benchmark, contributing to a noticeable boost in the overall score. The optimization was inspired by the benchmark, but such patterns do appear in real-world code.
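A hypothetical sketch of the kind of pattern involved (the benchmark’s actual code is not reproduced here): a top-level `let` binding holding a floating-point value that is written on every iteration of a hot loop. Each such store can force a fresh heap-number allocation unless the engine can update the number in place.

```js
// Illustrative only: a script-level `let` double updated in a hot loop.
// Without in-place (mutable) heap numbers, every store to `total`
// may allocate a new boxed number on the heap.
let total = 0.5;

function accumulate(values) {
  for (const v of values) {
    total += v * 1.1; // frequent double stores into the same slot
  }
}

accumulate(new Array(1_000_000).fill(0.25));
console.log(total);
```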

Static Roots: Objects with Compile-Time Constant Addresses

· 5 min read
Olivier Flückiger

Did you ever wonder where undefined, true, and other core JavaScript objects come from? These objects are the atoms of any user-defined object and need to be there first. V8 calls them immovable immutable roots and they live in their own heap – the read-only heap. Since they are used constantly, quick access is crucial. And what could be quicker than correctly guessing their memory address at compile time?
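As an illustration (this example is ours, not from the post), consider how often code compares against these roots. If `undefined` sits at a fixed, compile-time-known address, such a check can compile down to a comparison against a constant rather than a load from a roots table:

```js
// Checks like this run constantly in real programs. With static roots,
// `value === undefined` can be compiled as a comparison against a
// constant address instead of first loading `undefined` at runtime.
function getOrDefault(map, key) {
  const value = map.get(key);
  return value === undefined ? 'default' : value;
}

console.log(getOrDefault(new Map(), 'missing')); // 'default'
```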

V8 is Faster and Safer than Ever!

· 7 min read
[Victor Gomes](https://twitter.com/VictorBFG), the Glühwein expert

Welcome to the thrilling world of V8, where speed is not just a feature but a way of life. As we bid farewell to 2023, it's time to celebrate the impressive accomplishments V8 has achieved this year.

Through innovative performance optimizations, V8 continues to push the boundaries of what's possible in the ever-evolving landscape of the Web. We introduced a new mid-tier compiler and implemented several improvements to the top-tier compiler infrastructure, the runtime and the garbage collector, which have resulted in significant speed gains across the board.

Maglev - V8’s Fastest Optimizing JIT

· 15 min read
[Toon Verwaest](https://twitter.com/tverwaes), [Leszek Swirski](https://twitter.com/leszekswirski), [Victor Gomes](https://twitter.com/VictorBFG), Olivier Flückiger, Darius Mercadier, and Camillo Bruni — not enough cooks to spoil the broth

In Chrome M117 we introduced a new optimizing compiler: Maglev. Maglev sits between our existing Sparkplug and TurboFan compilers, and fills the role of a fast optimizing compiler that generates good enough code, fast enough.

Until 2021 V8 had two main execution tiers: Ignition, the interpreter; and TurboFan, V8’s optimizing compiler focused on peak performance. All JavaScript code is first compiled to Ignition bytecode and executed by interpreting it. During execution V8 tracks how the program behaves, including tracking object shapes and types. Both the runtime execution metadata and the bytecode are fed into the optimizing compiler to generate high-performance, often speculative, machine code that runs significantly faster than the interpreter.
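A small illustrative example (ours, not from the post) of the speculation this enables: if the recorded feedback shows a function has only ever seen numbers, the optimizing tiers can emit code specialized for numbers, and fall back (deoptimize) if that assumption later breaks.

```js
function add(a, b) {
  return a + b; // feedback records that `+` has only seen numbers
}

// Warm-up: trains the type feedback and lets V8 tier the function up
// to speculative, number-specialized machine code.
for (let i = 0; i < 100_000; i++) add(i, i + 1);

// Violates the speculation: the optimized code deoptimizes and this
// call falls back to more generic, slower handling.
add('foo', 'bar');
```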

Short builtin calls

· 5 min read
[Toon Verwaest](https://twitter.com/tverwaes), The Big Short

In V8 v9.1 we’ve temporarily disabled embedded builtins on desktop. While embedding builtins significantly improves memory usage, we’ve realized that function calls between embedded builtins and JIT-compiled code can come at a considerable performance penalty. This cost depends on the microarchitecture of the CPU. In this post we’ll explain why this is happening, what the performance looks like, and what we’re planning to do to resolve this long-term.
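As a rough, hypothetical illustration of the shape of code affected (whether a particular builtin is actually called out-of-line depends on the tier and on inlining decisions), consider a hot loop whose body is dominated by calls from JIT-compiled code into builtins:

```js
// Hypothetical hot loop: much of its time is spent crossing the
// boundary between JIT-compiled code and builtins such as
// String.prototype.indexOf, which is where far-call overhead shows up.
function countMatches(lines, needle) {
  let count = 0;
  for (const line of lines) {
    if (line.indexOf(needle) !== -1) count++;
  }
  return count;
}

console.log(countMatches(new Array(100_000).fill('abcdef'), 'cde'));
```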

Super fast `super` property access

· 7 min read
[Marja Hölttä](https://twitter.com/marjakh), super optimizer

The `super` keyword can be used for accessing properties and functions on an object’s parent.

Previously, accessing a super property (like `super.x`) was implemented via a runtime call. Starting from V8 v9.0, we reuse the inline cache (IC) system in non-optimized code and generate the proper optimized code for super property access, without having to jump to the runtime.
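An example of the kind of access being described (the class names here are illustrative):

```js
class Base {
  get x() { return 'base x'; }
}

class Derived extends Base {
  get x() {
    // `super.x` starts the property lookup at Base.prototype. From
    // V8 v9.0 this uses the inline cache system instead of a
    // runtime call.
    return `derived sees: ${super.x}`;
  }
}

console.log(new Derived().x); // 'derived sees: base x'
```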

Up to 4GB of memory in WebAssembly

· 8 min read
Andreas Haas, Jakob Kummerow, and Alon Zakai

Introduction

Thanks to recent work in Chrome and Emscripten, you can now use up to 4GB of memory in WebAssembly applications. That’s up from the previous limit of 2GB. It might seem odd that there was ever a limit (after all, no work was needed to allow people to use 512MB or 1GB of memory!), but it turns out that there are some special things happening in the jump from 2GB to 4GB, both in the browser and in the toolchain, which we’ll describe in this post.
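For scale: WebAssembly memory is measured in 64KiB pages, so 4GB corresponds to 65536 pages. A quick sketch (ours, not from the post) of requesting such a memory from JavaScript:

```js
// 4GiB = 65536 pages × 64KiB. With the new limit, a maximum this
// large can be requested; the actual allocation can still fail if
// the OS cannot reserve the space.
const memory = new WebAssembly.Memory({
  initial: 1,      // one 64KiB page to start
  maximum: 65536,  // allow growth up to 4GiB
});

memory.grow(1); // returns the previous size, in pages
console.log(memory.buffer.byteLength); // 131072 bytes (2 pages)
```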