Land ahoy: leaving the Sea of Nodes

· 31 min read
Darius Mercadier

V8’s end-tier optimizing compiler, TurboFan, is famously one of the few large-scale production compilers to use Sea of Nodes (SoN). Almost three years ago, however, we started to move away from Sea of Nodes and fall back to a more traditional Control-Flow Graph (CFG) Intermediate Representation (IR), which we named Turboshaft. By now, the whole JavaScript backend of TurboFan uses Turboshaft instead, and WebAssembly uses Turboshaft throughout its whole pipeline. Two parts of TurboFan still use some Sea of Nodes: the builtin pipeline, which we’re slowly replacing with Turboshaft, and the frontend of the JavaScript pipeline, which we’re replacing with Maglev, another CFG-based IR. This blog post explains the reasons that led us to move away from Sea of Nodes.

Turbocharging V8 with mutable heap numbers

· 6 min read
[Victor Gomes](https://twitter.com/VictorBFG), the bit shifter

At V8, we're constantly striving to improve JavaScript performance. As part of this effort, we recently revisited the JetStream2 benchmark suite to eliminate performance cliffs. This post details a specific optimization we made that yielded a significant 2.5x improvement in the async-fs benchmark, contributing to a noticeable boost in the overall score. The optimization was inspired by the benchmark, but such patterns do appear in real-world code.
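As a hedged, hypothetical illustration (not the benchmark's actual code, and all names invented) of the kind of pattern where mutable heap numbers can pay off, consider a top-level numeric binding that is rewritten in a hot loop:

```js
// Hypothetical example: a top-level `let` holding a floating-point value
// that is rewritten on every iteration of a hot loop. Naively, each
// assignment boxes the new value in a freshly allocated HeapNumber; with a
// mutable heap number backing the binding's slot, the value can instead be
// updated in place, avoiding the per-iteration allocation.
let total = 0.0;
function accumulate(x) {
  total += x; // hot write to a top-level numeric binding
}
for (let i = 0; i < 1_000_000; i++) accumulate(0.25);
console.log(total);
```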

WebAssembly JSPI has a new API

· 7 min read
Francis McCabe, Thibaud Michaud, Ilya Rezvov, Brendan Dahl

WebAssembly’s JavaScript Promise Integration (JSPI) has a new API, available in Chrome release M126. We talk about what has changed, how to use it with Emscripten, and what the roadmap for JSPI is.

JSPI is an API that allows WebAssembly applications that use sequential APIs to access Web APIs that are asynchronous. Many Web APIs are crafted in terms of JavaScript Promise objects: instead of immediately performing the requested operation, they return a Promise to do so. On the other hand, many applications compiled to WebAssembly come from the C/C++ universe, which is dominated by APIs that block the caller until the requested operation completes.
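For the flavor of the new API shape, here is a minimal sketch of its two entry points, `WebAssembly.Suspending` and `WebAssembly.promising`; the module file name, import name, and export name are invented for the example:

```js
// Wrap a Promise-returning JS function so that, when it is called from
// WebAssembly, the calling WebAssembly stack is suspended until the Promise
// settles (the import name `sleep` is hypothetical).
const imports = {
  env: {
    sleep: new WebAssembly.Suspending(
        (ms) => new Promise((resolve) => setTimeout(resolve, ms))),
  },
};
const { instance } =
    await WebAssembly.instantiateStreaming(fetch('app.wasm'), imports);
// Wrap a sequential WebAssembly export so that JS callers get a Promise back
// instead of a blocked thread (the export name `main` is hypothetical).
const main = WebAssembly.promising(instance.exports.main);
await main();
```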

The V8 Sandbox

· 14 min read
Samuel Groß

Almost three years and hundreds of CLs after the initial design document, the V8 Sandbox, a lightweight, in-process sandbox for V8, has now progressed to the point where it is no longer considered an experimental security feature. Starting today, the V8 Sandbox is included in Chrome's Vulnerability Reward Program (VRP). While there are still a number of issues to resolve before it becomes a strong security boundary, the VRP inclusion is an important step in that direction. Chrome 123 can therefore be considered a sort of "beta" release for the sandbox. This blog post takes the opportunity to discuss the motivation behind the sandbox, show how it prevents memory corruption in V8 from spreading within the host process, and ultimately explain why it is a necessary step towards memory safety.

WebAssembly JSPI is going to origin trial

· 4 min read
Francis McCabe, Thibaud Michaud, Ilya Rezvov, Brendan Dahl

WebAssembly’s JavaScript Promise Integration (JSPI) API is entering an origin trial with Chrome release M123. This means that you can test whether you and your users can benefit from this new API.

JSPI is an API that allows so-called sequential code, compiled to WebAssembly, to access Web APIs that are asynchronous. Many Web APIs are crafted in terms of JavaScript Promises: instead of immediately performing the requested operation, they return a Promise to do so. When the operation finally completes, the browser’s task runner invokes any callbacks with the Promise. JSPI hooks into this architecture to allow a WebAssembly application to be suspended when the Promise is returned and resumed when that Promise is resolved.
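As a plain-JavaScript illustration of the two styles being bridged (using `fetch` only as an example of a Promise-returning Web API):

```js
// Promise-style: the Web API returns immediately, and the browser's task
// runner invokes the callbacks once the Promise resolves.
fetch('/data.json')
    .then((response) => response.json())
    .then((data) => console.log(data));

// Sequential code compiled to WebAssembly instead expects something like:
//   data = fetch_and_parse("/data.json");  // block here until the data is ready
// (fetch_and_parse is a made-up name.) JSPI suspends the WebAssembly stack at
// such a call when the Promise is returned, and resumes it when the Promise
// resolves.
```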

Static Roots: Objects with Compile-Time Constant Addresses

· 5 min read
Olivier Flückiger

Did you ever wonder where undefined, true, and other core JavaScript objects come from? These objects are the atoms of any user-defined object and need to be there first. V8 calls them immovable immutable roots and they live in their own heap: the read-only heap. Since they are used constantly, quick access is crucial. And what could be quicker than correctly guessing their memory address at compile time?
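One way to peek at these roots, sketched for V8's d8 shell (this assumes the `--allow-natives-syntax` flag; `%DebugPrint` is a d8 debugging intrinsic, and the exact output varies by build):

```js
// Run with: d8 --allow-natives-syntax roots.js
// `undefined` and `true` are internal Oddball objects living in the
// read-only heap; the point of static roots is that their addresses are
// known at compile time, so generated code can refer to them directly
// rather than loading them from a roots table.
%DebugPrint(undefined);
%DebugPrint(true);
```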

V8 is Faster and Safer than Ever!

· 7 min read
[Victor Gomes](https://twitter.com/VictorBFG), the Glühwein expert

Welcome to the thrilling world of V8, where speed is not just a feature but a way of life. As we bid farewell to 2023, it's time to celebrate the impressive accomplishments V8 has achieved this year.

Through innovative performance optimizations, V8 continues to push the boundaries of what's possible in the ever-evolving landscape of the Web. We introduced a new mid-tier compiler and implemented several improvements to the top-tier compiler infrastructure, the runtime and the garbage collector, which have resulted in significant speed gains across the board.

Maglev - V8’s Fastest Optimizing JIT

· 15 min read
[Toon Verwaest](https://twitter.com/tverwaes), [Leszek Swirski](https://twitter.com/leszekswirski), [Victor Gomes](https://twitter.com/VictorBFG), Olivier Flückiger, Darius Mercadier, and Camillo Bruni — not enough cooks to spoil the broth

In Chrome M117 we introduced a new optimizing compiler: Maglev. Maglev sits between our existing Sparkplug and TurboFan compilers, and fills the role of a fast optimizing compiler that generates good enough code, fast enough.

Until 2021, V8 had two main execution tiers: Ignition, the interpreter; and TurboFan, V8’s optimizing compiler focused on peak performance. All JavaScript code is first compiled to Ignition bytecode and executed by interpreting it. During execution, V8 tracks how the program behaves, including object shapes and types. Both the runtime execution metadata and the bytecode are fed into the optimizing compiler to generate high-performance, often speculative, machine code that runs significantly faster than the interpreter can.
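To watch this tier-up happen, here is a small sketch for the d8 shell using its tracing flags (the function is arbitrary; exact thresholds and log output vary by version):

```js
// Run with: d8 --trace-opt --trace-deopt tiering.js
function add(a, b) {
  return a + b; // first interpreted by Ignition, which collects type feedback
}
let sum = 0;
// Repeated calls make `add` hot, so V8 tiers it up to an optimizing compiler;
// --trace-opt logs when (and by which tier) that happens.
for (let i = 0; i < 100_000; i++) sum += add(i, i + 1);
console.log(sum);
```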