Google Chrome M137 Brings Speculative Optimizations to WebAssembly, Boosting Performance by Over 50% in Some Cases


V8 Introduces Speculative Inlining and Deoptimization for WebAssembly

Google's V8 JavaScript engine has shipped a pair of speculative optimizations for WebAssembly with Chrome M137, significantly accelerating execution—especially for WasmGC programs. On Dart microbenchmarks, the combination of speculative call_indirect inlining and deoptimization yields average speedups exceeding 50%, while larger applications see gains between 1% and 8%.

Source: v8.dev

“These optimizations allow us to generate better machine code by making assumptions based on runtime feedback,” a V8 engineer told reporters. “That’s particularly important for WasmGC, where richer types benefit from speculation.”

How the Optimizations Work

Speculative inlining replaces indirect function calls with direct, inlined code based on frequently observed call targets. If the assumption later proves wrong, V8 performs a deoptimization (deopt): it discards the optimized code and falls back to unoptimized execution, collecting fresh feedback for future re-optimization.
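The mechanism can be sketched in plain JavaScript. This is a simplified, hand-written model, not V8's actual code generation; the names `table`, `genericCall`, and `speculativeCall` are hypothetical. Suppose profiling shows that slot 0 of a function table is almost always the call target:

```javascript
// A function table, as used for indirect calls (cf. Wasm's call_indirect).
const table = [x => x * 2, x => x + 1];

// Unoptimized tier: every call is a true indirect call through the table.
function genericCall(slot, x) {
  return table[slot](x);
}

// "Optimized" tier: guard on the speculated target, then run its inlined body.
function speculativeCall(slot, x) {
  if (table[slot] === table[0]) {
    return x * 2;             // inlined body of the hot target
  }
  // Guard failed: conceptually, this is the deopt back to the generic path,
  // where fresh call-target feedback would be collected.
  return genericCall(slot, x);
}
```

The guard is what makes the speculation safe: the fast path only runs when the assumption still holds, and the fallback preserves correct behavior for every other target.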

This approach mirrors long-standing techniques in JavaScript JIT compilation. Until now, WebAssembly didn't require such speculation because its static typing and ahead-of-time compilation (e.g., from C/C++) already produced efficient code. But the WasmGC extension, which lets managed languages like Java, Kotlin, and Dart be compiled to WebAssembly, introduces higher-level features such as structs, arrays, and subtyping that benefit from runtime feedback.

“Deoptimizations are also an important building block for further optimizations in the future,” the V8 team noted. The new infrastructure paves the way for more aggressive speculation down the line.

Background: From JavaScript to WebAssembly

Fast execution of JavaScript has long relied on speculative optimizations. For example, given the expression a + b, V8’s JIT compiler may generate optimized integer addition code if past executions showed both operands were integers. If the program later violates that assumption, V8 deoptimizes seamlessly.
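A minimal sketch of the kind of code this feedback loop acts on (the 100,000-iteration warm-up count is illustrative; real tiering thresholds are internal to V8):

```javascript
function add(a, b) { return a + b; }

// Many calls with integer operands produce integer type feedback, so the
// JIT may compile `add` down to a fast small-integer addition with a guard.
for (let i = 0; i < 100000; i++) add(i, i + 1);

// A call that violates the assumption triggers a deoptimization: V8 discards
// the optimized code and falls back to generic addition. The program itself
// observes nothing but correct results.
add("foo", "bar");
```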

WebAssembly 1.0 programs—typically compiled from C, C++, or Rust—were already well optimized due to static typing and ahead-of-time tools like Emscripten and Binaryen. Deopts were unnecessary. But the WasmGC proposal changes the game by supporting higher-level languages that rely on garbage collection and dynamic dispatch.

What This Means for Developers

For developers compiling managed languages to WebAssembly via WasmGC, this update delivers a substantial performance boost without manual tuning. Applications written in Dart, Kotlin, or Java can expect faster execution, especially on code with heavy indirect calls.

Moreover, the deoptimization infrastructure provides a foundation for future speculative techniques. “We see this as a baseline—more optimizations that rely on runtime feedback are now possible,” the V8 engineer added. Developers should watch for further improvements in upcoming Chrome releases.
