
A new way to bring garbage collected programming languages efficiently to WebAssembly

· 27 min read
Alon Zakai

A recent article on WebAssembly Garbage Collection (WasmGC) explains at a high level how the Garbage Collection (GC) proposal aims to better support GC languages in Wasm, which is very important given their popularity. In this article, we will get into the technical details of how GC languages such as Java, Kotlin, Dart, Python, and C# can be ported to Wasm. There are in fact two main approaches:

Control-flow Integrity in V8

· 9 min read
Stephen Röttger

Control-flow integrity (CFI) is a security feature aiming to prevent exploits from hijacking control-flow. The idea is that even if an attacker manages to corrupt the memory of a process, additional integrity checks can prevent them from executing arbitrary code. In this blog post, we want to discuss our work to enable CFI in V8.
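
To make the idea concrete, here is a minimal C++ sketch of a forward-edge integrity check, written for this summary rather than taken from V8: an indirect call target is validated against an allow-list of known-good entry points before the call is made. The names kAllowedTargets and Dispatch are invented for illustration.

#include <cstdio>
#include <cstdlib>

// Hypothetical handlers; in a real program these would be the only
// functions that are legitimate indirect-call targets.
void HandleA() { std::puts("A"); }
void HandleB() { std::puts("B"); }

using Handler = void (*)();

// Allow-list of valid call targets, populated at build or load time.
constexpr Handler kAllowedTargets[] = {&HandleA, &HandleB};

// Forward-edge CFI check: before performing the indirect call, verify that
// the (possibly attacker-controlled) pointer is one of the expected entry
// points. If not, abort instead of executing arbitrary code.
void Dispatch(Handler target) {
  for (Handler allowed : kAllowedTargets) {
    if (target == allowed) {
      target();  // Target passed the integrity check.
      return;
    }
  }
  std::abort();  // Corrupted pointer: fail fast rather than be hijacked.
}

int main() {
  Dispatch(&HandleB);
  return 0;
}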

Speeding up V8 heap snapshots

· 11 min read
José Dapena Paz

This blog post has been authored by José Dapena Paz (Igalia), with contributions from Jason Williams (Bloomberg), Ashley Claymore (Bloomberg), Rob Palmer (Bloomberg), Joyee Cheung (Igalia), and Shu-yu Guo (Google).

In this post about V8 heap snapshots, I will talk about some performance problems found by Bloomberg engineers, and how we fixed them to make JavaScript memory analysis faster than ever.

The problem

Bloomberg engineers were working on diagnosing a memory leak in a JavaScript application. It was failing with Out-Of-Memory errors. For the tested application, the V8 heap limit was configured to be around 1400 MB. Normally V8’s garbage collector should be able to keep the heap usage under that limit, so the failures indicated that there was likely a leak.
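
For context, here is a minimal embedder sketch of how a V8 heap limit of roughly 1400 MB can be configured; the post does not show the application's actual setup, so treat the numbers and the exact API usage here as illustrative assumptions only.

#include <libplatform/libplatform.h>
#include <v8.h>

#include <memory>

int main() {
  // Standard embedder boilerplate: create a platform and initialize V8.
  std::unique_ptr<v8::Platform> platform = v8::platform::NewDefaultPlatform();
  v8::V8::InitializePlatform(platform.get());
  v8::V8::Initialize();

  v8::Isolate::CreateParams params;
  params.array_buffer_allocator =
      v8::ArrayBuffer::Allocator::NewDefaultAllocator();
  // Derive heap constraints from a maximum heap size of roughly 1400 MB.
  // If the garbage collector cannot keep usage under this limit, the
  // isolate eventually reports an out-of-memory condition, as described above.
  params.constraints.ConfigureDefaultsFromHeapSize(
      /*initial_heap_size_in_bytes=*/0,
      /*maximum_heap_size_in_bytes=*/1400ull * 1024 * 1024);

  v8::Isolate* isolate = v8::Isolate::New(params);
  // ... run application code; heap snapshots for leak hunting can then be
  // taken through DevTools or the v8::HeapProfiler API ...
  isolate->Dispose();
  delete params.array_buffer_allocator;
  v8::V8::Dispose();
  return 0;
}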

WebAssembly tail calls

· 9 min read
Thibaud Michaud, Thomas Lively

We are shipping WebAssembly tail calls in V8 v11.2! In this post we give a brief overview of this proposal, demonstrate an interesting use case for C++ coroutines with Emscripten, and show how V8 handles tail calls internally.

What is Tail Call Optimization?

A call is said to be in tail position if it is the last instruction executed before returning from the current function. Compilers can optimize such calls by discarding the caller frame and replacing the call with a jump.

This is especially useful for recursive functions. For instance, take this C function that sums the elements of a linked list:

int sum(List* list, int acc) {
  if (list == nullptr) return acc;
  return sum(list->next, acc + list->val);
}

With a regular call, this consumes 𝒪(n) stack space: each element of the list adds a new frame on the call stack. With a long enough list, this could very quickly overflow the stack. By replacing the call with a jump, tail call optimization effectively turns this recursive function into a loop which uses 𝒪(1) stack space:
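
Concretely, the loop that a tail-call-optimizing compiler effectively produces looks roughly like the hand-written sketch below; the List definition and the name sum_iterative are assumptions added for illustration.

// The List type assumed by the example above: a singly linked list node.
struct List {
  int val;
  List* next;
};

// Equivalent iterative form: the tail call becomes a jump back to the top of
// the loop, so stack usage stays constant regardless of the list's length.
int sum_iterative(List* list, int acc) {
  while (list != nullptr) {
    acc += list->val;
    list = list->next;
  }
  return acc;
}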

Pointer compression in Oilpan

· 14 min read
Anton Bikineev and Michael Lippautz ([@mlippautz](https://twitter.com/mlippautz)), walking disassemblers

It is absolutely idiotic to have 64-bit pointers when I compile a program that uses less than 4 gigabytes of RAM. When such pointer values appear inside a struct, they not only waste half the memory, they effectively throw away half of the cache.

Donald Knuth (2008)
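
To illustrate the general idea behind pointer compression (a generic sketch, not Oilpan's actual implementation): on-heap references are stored as 32-bit offsets from a known heap base and expanded back to full pointers on access, halving the size of each stored reference and improving cache utilization. The names g_heap_base and CompressedMember below are invented for this example.

#include <cstdint>

namespace {
uintptr_t g_heap_base = 0;  // Base of the reserved heap region ("cage").
uintptr_t HeapBase() { return g_heap_base; }
}  // namespace

template <typename T>
class CompressedMember {
 public:
  void Set(T* ptr) {
    // Compress: store only the 32-bit offset from the heap base.
    // (Null handling and type checks are omitted in this sketch.)
    value_ = static_cast<uint32_t>(
        reinterpret_cast<uintptr_t>(ptr) - HeapBase());
  }
  T* Get() const {
    // Decompress: add the heap base back to reconstruct the full pointer.
    return reinterpret_cast<T*>(HeapBase() + value_);
  }

 private:
  uint32_t value_;  // 4 bytes per reference instead of 8.
};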

Discontinuing release blog posts

· 3 min read
Shu-yu Guo ([@_shu](https://twitter.com/_shu))

Historically, there has been a blog post for each new release branch of V8. You may have noticed there has not been a release blog post since v9.9. From v10.0 onward, we are discontinuing release blog posts for each new branch. But don’t worry, all the information you were used to getting via release blog posts is still available! Read on to see where to find that information going forward.

Retrofitting temporal memory safety on C++

· 12 min read
Anton Bikineev, Michael Lippautz ([@mlippautz](https://twitter.com/mlippautz)), Hannes Payer ([@PayerHannes](https://twitter.com/PayerHannes))

Note: This post was originally posted on the Google Security Blog.

Memory safety in Chrome is an ever-ongoing effort to protect our users. We are constantly experimenting with different technologies to stay ahead of malicious actors. In this spirit, this post is about our journey of using heap scanning technologies to improve memory safety of C++.
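
As a rough sketch of what "heap scanning" means here (a simplified model written for this summary, not Chrome's actual allocator): freed blocks are quarantined instead of being reused immediately, and a scan over memory looks for lingering pointers into the quarantine before any block is handed out again, which neutralizes use-after-free bugs.

#include <cstdlib>
#include <cstring>
#include <unordered_set>
#include <vector>

// Simplified quarantine-based allocator sketch. Real heap scanning also
// scans stacks and uses far more efficient data structures.
class QuarantiningAllocator {
 public:
  void* Allocate(size_t size) { return std::malloc(size); }

  // Instead of freeing immediately, park the block in a quarantine so a
  // dangling pointer cannot alias a new, unrelated allocation.
  void Deallocate(void* ptr, size_t size) { quarantine_.push_back({ptr, size}); }

  // Scan a region of memory (e.g. the heap) for values that look like
  // pointers into quarantined blocks; only blocks with no remaining
  // references are actually released for reuse.
  void ScanAndRelease(const void* region, size_t region_size) {
    std::unordered_set<const void*> referenced;
    const char* p = static_cast<const char*>(region);
    for (size_t i = 0; i + sizeof(void*) <= region_size; i += sizeof(void*)) {
      void* candidate;
      std::memcpy(&candidate, p + i, sizeof(void*));
      for (const Block& b : quarantine_) {
        if (candidate >= b.start &&
            candidate < static_cast<const char*>(b.start) + b.size) {
          referenced.insert(b.start);
        }
      }
    }
    std::vector<Block> still_quarantined;
    for (const Block& b : quarantine_) {
      if (referenced.count(b.start)) {
        still_quarantined.push_back(b);  // Still referenced: keep quarantined.
      } else {
        std::free(b.start);  // No dangling references found: safe to reuse.
      }
    }
    quarantine_ = std::move(still_quarantined);
  }

 private:
  struct Block { void* start; size_t size; };
  std::vector<Block> quarantine_;
};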

Faster initialization of instances with new class features

· 13 min read
[Joyee Cheung](https://twitter.com/JoyeeCheung), instance initializer

Class fields have been shipped in V8 since v7.2 and private class methods have been shipped since v8.4. After the proposals reached stage 4 in 2021, work began on improving support for the new class features in V8; until then, there had been two main issues affecting their adoption:

V8 release v9.9

· 4 min read
Ingvar Stepanyan ([@RReverser](https://twitter.com/RReverser)), at his 99%

Every four weeks, we create a new branch of V8 as part of our release process. Each version is branched from V8’s Git main immediately before a Chrome Beta milestone. Today we’re pleased to announce our newest branch, V8 version 9.9, which is in beta until its release in coordination with Chrome 99 Stable in several weeks. V8 v9.9 is filled with all sorts of developer-facing goodies. This post provides a preview of some of the highlights in anticipation of the release.

Oilpan library

· 7 min read
Anton Bikineev, Omer Katz ([@omerktz](https://twitter.com/omerktz)), and Michael Lippautz ([@mlippautz](https://twitter.com/mlippautz)), efficient and effective file movers

While the title of this post may suggest taking a deep dive into a collection of books around oil pans – which, considering construction norms for pans, is a topic with a surprising amount of literature – we are instead taking a closer look at Oilpan, a C++ garbage collector that has been hosted through V8 as a library since V8 v9.4.
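
To give a flavor of what using Oilpan as a library looks like, here is a minimal hello-world-style sketch assuming the public cppgc headers and the DefaultPlatform helper shipped with V8; consult the samples in the V8 tree for the authoritative version, as the exact API surface may differ between releases.

#include <cppgc/allocation.h>
#include <cppgc/default-platform.h>
#include <cppgc/garbage-collected.h>
#include <cppgc/heap.h>
#include <cppgc/member.h>
#include <cppgc/visitor.h>

#include <memory>

// A garbage-collected linked-list node. Members that point to other
// managed objects use cppgc::Member and are reported in Trace().
class LinkedNode final : public cppgc::GarbageCollected<LinkedNode> {
 public:
  LinkedNode(LinkedNode* next, int value) : next_(next), value_(value) {}
  void Trace(cppgc::Visitor* visitor) const { visitor->Trace(next_); }

 private:
  cppgc::Member<LinkedNode> next_;
  int value_;
};

int main() {
  auto platform = std::make_shared<cppgc::DefaultPlatform>();
  cppgc::InitializeProcess(platform->GetPageAllocator());
  std::unique_ptr<cppgc::Heap> heap = cppgc::Heap::Create(platform);

  // Allocate a managed object on the Oilpan heap; unreachable objects are
  // reclaimed automatically by the garbage collector.
  LinkedNode* node = cppgc::MakeGarbageCollected<LinkedNode>(
      heap->GetAllocationHandle(), nullptr, 1);
  (void)node;

  heap.reset();
  cppgc::ShutdownProcess();
  return 0;
}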