
17 posts tagged with "memory"


Speeding up V8 heap snapshots

· 11 min read
José Dapena Paz

This blog post has been authored by José Dapena Paz (Igalia), with contributions from Jason Williams (Bloomberg), Ashley Claymore (Bloomberg), Rob Palmer (Bloomberg), Joyee Cheung (Igalia), and Shu-yu Guo (Google).

In this post about V8 heap snapshots, I will talk about some performance problems found by Bloomberg engineers, and how we fixed them to make JavaScript memory analysis faster than ever.

The problem

Bloomberg engineers were working on diagnosing a memory leak in a JavaScript application. It was failing with Out-Of-Memory errors. For the tested application, the V8 heap limit was configured to be around 1400 MB. Normally V8’s garbage collector should be able to keep the heap usage under that limit, so the failures indicated that there was likely a leak.
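For readers who want to try this kind of analysis themselves, a heap snapshot can be captured programmatically from Node.js. The sketch below is a minimal illustration and not the Bloomberg setup; the 1400 MB limit and the script name are assumptions.

```js
// Run with a heap limit similar to the one described above (assumed value):
//   node --max-old-space-size=1400 snapshot-example.js
const v8 = require('node:v8');

// Report the configured limit and current usage (values are in bytes).
const { heap_size_limit, used_heap_size } = v8.getHeapStatistics();
console.log(`limit: ${(heap_size_limit / 1024 / 1024).toFixed(0)} MB`);
console.log(`used:  ${(used_heap_size / 1024 / 1024).toFixed(0)} MB`);

// Write a .heapsnapshot file that can be loaded into the Chrome DevTools
// Memory panel to hunt for the suspected leak.
const file = v8.writeHeapSnapshot();
console.log(`snapshot written to ${file}`);
```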

Pointer compression in Oilpan

· 14 min read
Anton Bikineev and Michael Lippautz ([@mlippautz](https://twitter.com/mlippautz)), walking disassemblers

It is absolutely idiotic to have 64-bit pointers when I compile a program that uses less than 4 gigabytes of RAM. When such pointer values appear inside a struct, they not only waste half the memory, they effectively throw away half of the cache.

Donald Knuth (2008)

Retrofitting temporal memory safety on C++

· 12 min read
Anton Bikineev, Michael Lippautz ([@mlippautz](https://twitter.com/mlippautz)), Hannes Payer ([@PayerHannes](https://twitter.com/PayerHannes))
note

This post was originally published on the Google Security Blog.

Memory safety in Chrome is an ever-ongoing effort to protect our users. We are constantly experimenting with different technologies to stay ahead of malicious actors. In this spirit, this post is about our journey of using heap scanning technologies to improve memory safety of C++.

Oilpan library

· 7 min read
Anton Bikineev, Omer Katz ([@omerktz](https://twitter.com/omerktz)), and Michael Lippautz ([@mlippautz](https://twitter.com/mlippautz)), efficient and effective file movers

While the title of this post may suggest taking a deep dive into a collection of books around oil pans – which, considering construction norms for pans, is a topic with a surprising amount of literature – we are instead taking a closer look at Oilpan, a C++ garbage collector that has been hosted through V8 as a library since V8 v9.4.

High-performance garbage collection for C++

· 10 min read
Anton Bikineev, Omer Katz ([@omerktz](https://twitter.com/omerktz)), and Michael Lippautz ([@mlippautz](https://twitter.com/mlippautz)), C++ memory whisperers

We have previously written about garbage collection for JavaScript, the document object model (DOM), and how all of this is implemented and optimized in V8. Not everything in Chromium is JavaScript, though: most of the browser and its Blink rendering engine, where V8 is embedded, are written in C++. JavaScript can be used to interact with the DOM, which is then processed by the rendering pipeline.

Pointer Compression in V8

· 22 min read
Igor Sheludko and Santiago Aboy Solanes, *the* pointer compressors

There is a constant battle between memory and performance. As users, we would like things to be fast and to consume as little memory as possible. Unfortunately, improving performance usually comes at the cost of memory consumption (and vice versa).

A lighter V8

· 12 min read
Mythri Alle, Dan Elphick, and [Ross McIlroy](https://twitter.com/rossmcilroy), V8 weight-watchers

In late 2018 we started a project called V8 Lite, aimed at dramatically reducing V8’s memory usage. Initially this project was envisioned as a separate Lite mode of V8, specifically aimed at low-memory mobile devices or embedder use-cases that care more about reduced memory usage than about execution throughput. However, in the process of this work, we realized that many of the memory optimizations we had made for this Lite mode could be brought over to regular V8, thereby benefiting all users of V8.

Trash talk: the Orinoco garbage collector

· 13 min read
Peter ‘the garbo’ Marshall ([@hooraybuffer](https://twitter.com/hooraybuffer))

Over the past few years the V8 garbage collector (GC) has changed a lot. The Orinoco project has taken a sequential, stop-the-world garbage collector and transformed it into a mostly parallel and concurrent collector with incremental fallback.

Concurrent marking in V8

· 13 min read
Ulan Degenbaev, Michael Lippautz, and Hannes Payer — main thread liberators

This post describes the garbage collection technique called concurrent marking. The optimization allows a JavaScript application to continue execution while the garbage collector scans the heap to find and mark live objects. Our benchmarks show that concurrent marking reduces the time spent marking on the main thread by 60%–70%. Concurrent marking is the last puzzle piece of the Orinoco project — the project to incrementally replace the old garbage collector with the new mostly concurrent and parallel garbage collector. Concurrent marking is enabled by default in Chrome 64 and Node.js v10.
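Concurrent marking itself is internal to V8, but its effect on main-thread pauses can be observed from user code. Below is a rough sketch using Node.js perf_hooks GC entries; the allocation loop is an arbitrary way to generate garbage, not something from the post.

```js
// Sketch: log main-thread GC durations via perf_hooks 'gc' entries.
const { PerformanceObserver } = require('node:perf_hooks');

const obs = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // entry.duration is the main-thread time for this GC step in ms;
    // with concurrent marking, most marking work happens off this thread.
    console.log(`${entry.name}: ${entry.duration.toFixed(2)} ms`);
  }
});
obs.observe({ entryTypes: ['gc'] });

// Churn some memory so the collector has work to do.
let junk = [];
for (let i = 0; i < 1_000_000; i++) {
  junk.push({ i, payload: 'x'.repeat(16) });
  if (i % 100_000 === 0) junk = [];
}
```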

Tracing from JS to the DOM and back again

· 5 min read
Ulan Degenbaev, Alexei Filippov, Michael Lippautz, and Hannes Payer — the fellowship of the DOM

Debugging memory leaks in Chrome 66 just became much easier. Chrome’s DevTools can now trace and snapshot C++ DOM objects and display all reachable DOM objects from JavaScript with their references. This feature is one of the benefits of the new C++ tracing mechanism of the V8 garbage collector.
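To make the feature concrete, here is a browser-side sketch of the kind of leak it helps diagnose: a detached DOM subtree kept alive by a JavaScript reference. The element id and variable names are hypothetical, not from the post.

```js
// Set up an initial panel so the sketch is self-contained.
const initial = document.createElement('div');
initial.id = 'panel';
document.body.appendChild(initial);

const detachedNodes = [];

function replacePanel() {
  const panel = document.getElementById('panel');
  // Keeping a JS reference to the removed subtree prevents the C++ DOM
  // objects behind it from being collected, even though it is detached.
  detachedNodes.push(panel);
  panel.remove();

  const fresh = document.createElement('div');
  fresh.id = 'panel';
  document.body.appendChild(fresh);
}

for (let i = 0; i < 100; i++) replacePanel();
// A DevTools heap snapshot now shows the detached DOM trees together with
// their JavaScript retainer (the detachedNodes array).
```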