
Take desktop screenshot with Electron

I am using Electron to create a Windows application that creates a fullscreen transparent overlay window. The purpose of this overlay is to:

  1. take a screenshot of the entire screen (not the overlay itself which is transparent, but the screen ‘underneath’),
  2. process this image by sending it as a byte stream to my Python server, and
  3. draw some things on the overlay

I am getting stuck on the first step, which is the screenshot capturing process.

I tried option 1, which is to use capturePage():

    this.electronService.remote.getCurrentWindow().webContents.capturePage()
        .then((img: Electron.NativeImage) => { ... });

but this captures my overlay window only (and not the desktop screen). The result is a blank image, which is useless to me.

Option 2 is to use desktopCapturer:

    this.electronService.remote.desktopCapturer.getSources({ types: ['screen'] }).then(sources => {
        for (const source of sources) {
            if ( === 'Screen 1') {
                try {
                    const mediaDevices = navigator.mediaDevices as any;
                        audio: false,
                        video: { // this specification is only available for Chrome -> Electron runs on Chromium
                            mandatory: {
                                chromeMediaSource: 'desktop',
                                minWidth: 1280,
                                maxWidth: 1280,
                                minHeight: 720,
                                maxHeight: 720
                            }
                        }
                    }).then((stream: MediaStream) => this.handleStream(stream)); // stream.getVideoTracks()[0] contains the video track I need
                } catch (e) {
                    console.error(e);
                }
            }
        }
    });
The next step is where it becomes fuzzy for me. What do I do with the acquired MediaStream to get a byte stream of a screenshot out of it? I see plenty of examples of how to display this stream on a webpage, but I want to send it to my backend. This StackOverflow post mentions how to do it, but I am not getting it to work properly. This is how I implemented handleStream():

    import * as MediaStreamRecorder from 'msr';

    private handleStream(stream: MediaStream): void {
        const recorder = new MediaStreamRecorder(stream);
        recorder.ondataavailable = (blob: Blob) => { // I immediately get a blob, while the linked SO page got an event and had to get a blob through<Result>('http://localhost:5050', blob).subscribe();
        };
        recorder.start(1000); // make the dataavailable event fire every second
    }
The blob is not being accepted by the Python server. Upon inspecting the contents of the Blob, it's a video, as I suspected. I verified this with the following code:

    const url = URL.createObjectURL(blob);, '_blank');

which opens the blob in a new window. It displays a video of maybe half a second, but I want a static image. So how do I get a specific snapshot out of it? I'm also not sure whether simply sending the JavaScript Blob in the POST body will allow Python to interpret it correctly. In Java it works by simply sending a byte[] of the image, so I have verified that the Python server implementation works as expected.
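On the transport question: a Blob can be sent as the raw request body, and a Python server then reads the same bytes a Java byte[] would produce. A minimal sketch, assuming the endpoint from the question and a plain octet-stream body:

```typescript
// Sketch: POST a Blob's raw bytes to the Python server.
// The URL is the one from the question; the Content-Type header is an assumption.
async function sendScreenshot(blob: Blob): Promise<void> {
    const response = await fetch('http://localhost:5050', {
        method: 'POST',
        headers: { 'Content-Type': 'application/octet-stream' },
        body: blob, // fetch sends the Blob's bytes unmodified
    });
    if (!response.ok) {
        throw new Error(`Upload failed: ${response.status}`);
    }
}
```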

Any suggestions other than using the desktopCapturer are also fine. This implementation captures my mouse cursor as well, which I'd rather not have. I must admit that I did not expect this feature to be so difficult to implement.


desktopCapturer only captures video. So you need to grab a single frame from it. You can use an HTML5 canvas for that. Here is an example:
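A sketch of that approach (the function name and PNG export are my choices; it assumes the renderer process, where DOM APIs are available):

```typescript
// Sketch: render one frame of the capture stream to a canvas and export it
// as a PNG Blob, instead of recording video with MediaStreamRecorder.
function captureFrame(stream: MediaStream): Promise<Blob> {
    return new Promise<Blob>((resolve, reject) => {
        const video = document.createElement('video');
        video.srcObject = stream;
        // wait until the stream is actually rendering frames
        video.onplaying = () => {
            const canvas = document.createElement('canvas');
            canvas.width = video.videoWidth;
            canvas.height = video.videoHeight;
            canvas.getContext('2d')!.drawImage(video, 0, 0);
            stream.getTracks().forEach(track => track.stop()); // one frame is enough
            canvas.toBlob(
                blob => (blob ? resolve(blob) : reject(new Error('toBlob failed'))),
                'image/png'
            );
        };;
    });
}
```

The resulting Blob contains PNG bytes, so it can be sent to the backend directly; no recorder is needed for a still image.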

Or, use some third-party screenshot library available on npm. The one I found needs ImageMagick installed on Linux, but maybe there are others, or you don't need to support Linux. You'll need to do that in the main Electron process, in which you can do anything you can do in Node.
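For instance, the `screenshot-desktop` package (one such npm library; whether it fits your platform support needs checking) resolves to an image buffer that can be forwarded as-is:

```typescript
// Sketch using the 'screenshot-desktop' npm package, run in the main process.
// The package and options are an assumption; any similar library would do.
import screenshot from 'screenshot-desktop';

async function grabScreen(): Promise<Buffer> {
    // resolves to the raw PNG bytes of the primary display
    const imgBuffer = await screenshot({ format: 'png' });
    return imgBuffer; // ready to send to the Python server as a byte stream
}
```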