
Canvas API implementation

I recently started to learn a bit about how JavaScript works under the hood, and came to know that (in the context of Chrome) the V8 engine and the Web APIs are separate things. Here are some questions I have about the Canvas API specifically:

  1. Why do we need to use getImageData() every time we want to access the pixels of a given canvas? Since canvas is pixel-based, shouldn’t there be a pixel array that the canvas API manipulates every time you draw on it, which would make it statically available?
  2. Is there a way to understand how specific APIs are implemented? For instance, how is ctx.fillRect() done internally? I tried doing it manually by changing specific pixel colors, but it turned out to be drastically slower. Maybe that is because I am doing it in JavaScript, while it is normally done internally in C++? Is there no way of finding out, since the implementation lives in the browsers’ own source code?

I might be confusing a lot of concepts, since I still don’t really understand how Web APIs or V8 work, so any clarification is appreciated.


Answer

Why do we need to use getImageData() every time we want to access the pixels of a given canvas? Since canvas is pixel-based, shouldn’t there be a pixel array that the canvas API manipulates every time you draw on it, which would make it statically available?

You are right, it could have been made this way; there are even active discussions about giving direct access to the pixel buffer, which would allow zero-copy read and write operations.
However, the original design deliberately detaches the pixel buffer from the executing script. Among other things, this allows GPU-based implementations, where all the drawing is performed by the GPU and the backing buffer lives in GPU memory, out of reach of scripts.
Note also that most implementations use double buffering, swapping between a front buffer and a back buffer to avoid tearing.
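
To make the copy semantics concrete, here is a small JavaScript sketch (assuming a DOM environment) showing that getImageData() hands you a snapshot taken at call time, and that edits to that snapshot only reach the canvas through an explicit putImageData(). The willReadFrequently context attribute used here is a hint asking the browser to keep a CPU-accessible backing buffer, which ties in with the GPU-backing point above.

    const canvas = document.createElement('canvas');
    canvas.width = canvas.height = 100;
    // Hint that we will read pixels back often, so a CPU-side backing is preferable.
    const ctx = canvas.getContext('2d', { willReadFrequently: true });

    ctx.fillStyle = 'red';
    ctx.fillRect(0, 0, 100, 100);

    // A copy of the pixels is made here; it is not a live view of the canvas.
    const snapshot = ctx.getImageData(0, 0, 100, 100);

    ctx.fillStyle = 'blue';
    ctx.fillRect(0, 0, 100, 100);           // the canvas is now blue...
    console.log(snapshot.data.slice(0, 4)); // ...but the snapshot still holds red: [255, 0, 0, 255]

    snapshot.data[2] = 255;                 // editing the copy has no visible effect...
    ctx.putImageData(snapshot, 0, 0);       // ...until it is explicitly written back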

Is there a way to understand how specific APIs are implemented? For instance, how is ctx.fillRect() done internally?

You can always try to navigate the sources: Chrome has the handy https://source.chromium.org/, and Firefox has https://searchfox.org. However, for the Canvas 2D API, it is a bit complicated to know where to look. Each browser has at least one rendering engine, which hosts all the API wrappers; these wrappers then call into yet another graphics engine, which actually generates the graphics.
In Chromium-based browsers the rendering engine is Blink and the graphics engine is Skia; Safari uses WebKit (from which Blink was forked) together with Core Graphics; and in Firefox, IIRC, Gecko uses various graphics backends depending on the platform (Cairo, Core Graphics, or Skia), so finding where the actual graphics operation happens in that browser is not that easy.
And to add to the fun, all these graphics engines support both a “software-rendering” (CPU) path and a “hardware-accelerated” (GPU) one.
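
To see why the manual, per-pixel approach is so much slower, here is an illustrative JavaScript sketch of what a solid-color fillRect() amounts to when done through ImageData: a read-back of the pixels, a per-pixel loop running in script, and a write-back. The native call is instead a single drawing command handed to the graphics engine (and possibly the GPU). The naiveFillRect helper below is purely hypothetical, a script-side approximation rather than how any browser actually implements fillRect().

    // Naive script-side "fillRect": NOT how browsers do it, just what the
    // equivalent work looks like when every pixel has to be touched from JS.
    function naiveFillRect(ctx, x, y, w, h, [r, g, b, a = 255]) {
      const img = ctx.getImageData(x, y, w, h);   // copy pixels out of the backing store
      const data = img.data;                      // Uint8ClampedArray, 4 bytes (RGBA) per pixel
      for (let i = 0; i < data.length; i += 4) {  // per-pixel loop in script
        data[i] = r;
        data[i + 1] = g;
        data[i + 2] = b;
        data[i + 3] = a;
      }
      ctx.putImageData(img, x, y);                // copy the whole block back in
    }

    // Native path: one call, executed by the graphics engine (Skia, Core Graphics, ...).
    // ctx.fillStyle = 'red';
    // ctx.fillRect(10, 10, 200, 100);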

But to help you get started on your journey, Blink’s implementation of fillRect() starts around here: https://source.chromium.org/chromium/chromium/src/+/main:third_party/blink/renderer/modules/canvas/canvas2d/base_rendering_context_2d.cc;l=1075


Nota bene: the JavaScript engine (e.g. V8) has very little to do with any of this.

User contributions licensed under: CC BY-SA