In V8, what is lazy deoptimization, and how does it happen?

According to the V8 source code and TurboFan materials, there is a type of deoptimization called lazy deoptimization, which is described as follows (v8/src/common/globals.h):

Lazy: the code has been marked as dependent on some assumption which is checked elsewhere and can trigger deoptimization the next time the code is executed.

However, when observing the execution of 'v8/test/mjsunit/compiler/deopt-lazy-shape-mutation.js' with d8, I found that deoptimization occurred immediately upon return from the function change_o. I guess this is because the map dependency that f has on o is invalidated by executing change_o, which mutates the shape of o.

> d8/d8 --trace-deopt --allow-natives-syntax test/deopt-lazy-shape-mutation.js
[marking dependent code 0x3d7d00044001 (0x3d7d08293535 <SharedFunctionInfo f>) (opt id 0) for deoptimization, reason: code dependencies]
[bailout (kind: deopt-lazy, reason: (unknown)): begin. deoptimizing 0x3d7d08293779 <JSFunction f (sfi = 0x3d7d08293535)>, opt id 0, node id 20, bytecode offset 4, deopt exit 0, FP to SP delta 32, caller SP 0x7ffdaa56ff68, pc 0x3d7d00044111]

My questions are:

  1. What exactly is lazy deoptimization? In the example above, is it correct to understand that f was deoptimized as soon as change_o returned because change_o marked some assumption of f as compromised?

  2. How does lazy deoptimization occur? In the case of eager deoptimization, I see that there are nodes named Deoptimize* which explicitly represent the immediate deoptimization condition and are assembled into machine code using call and conditional jumps such as jnz, ja, etc. However, I cannot figure out how lazy deoptimization kicks into the execution flow. Is there some supervisor that monitors the call-ret operation and triggers deoptimization when a callee compromises a dependency of its caller?


Answer

(V8 developer here.)

  1. What exactly is lazy deoptimization?

It’s a “scheduled” deoptimization of a function that currently has one or more activations on the stack, but isn’t the currently executing function (which would own the topmost stack frame, and would perform an “eager deoptimization” if it had to). Deoptimizing implies having to rewrite the stack frame’s contents, which is prohibitively difficult to do for any non-topmost stack frames, so such functions are marked for deoptimization, and will get deoptimized as soon as control returns to them (i.e. when they become the topmost stack frame).

Note that the same function can get deoptimized both eagerly (for its currently executing activation) and lazily (for any additional activations further down in the stack).

In the example above, is it correct to understand that f was deoptimized as soon as change_o returned because change_o marked some assumption of f as compromised?

Yes. change_o invalidates an assumption that has been made when f was optimized earlier. (Any subsequent optimization of f will not make the same assumption.)

  2. How does lazy deoptimization occur?

The return addresses on the stack are rewritten, so instead of resuming execution of the original code, the deoptimization sequence is started. See class ActivationsFinder in deoptimizer.cc if you want to dive into the details.

User contributions licensed under: CC BY-SA