I’m using Pino. I’m trying to encrypt the log stream and write it to a file. One way I can achieve this is by creating a pipeline that transforms the data and encrypts its contents, like so (this works fine):
```ts
import build from "pino-abstract-transport";
import { pipeline, Transform } from "node:stream";
import crypto from "node:crypto";

const ALGORITHM = "aes-256-ctr";

export default async function (options: { password: string }) {
  const password = Buffer.from(options.password, "hex");
  const iv = crypto.randomBytes(16);
  return build(
    function (source) {
      const myTransportStream = new Transform({
        autoDestroy: true,
        objectMode: true,
        transform(chunk, _enc, end) {
          const encrypt = crypto.createCipheriv(ALGORITHM, password, iv);
          const data = encrypt.update(JSON.stringify(chunk));
          const encrypted = Buffer.concat([data, encrypt.final()]);
          this.push(encrypted.toString("hex") + "\n");
          end();
        },
      });
      pipeline(source, myTransportStream, () => {});
      return myTransportStream;
    },
    {
      enablePipelining: true,
    }
  );
}
```
How can I reuse the same `const encrypt = crypto.createCipheriv(ALGORITHM, password, iv);` instance, so as not to create a new one every time? Do I gain some performance by doing this refactoring?
I tried this:
```ts
import build from "pino-abstract-transport";
import { pipeline, Transform } from "node:stream";
import crypto from "node:crypto";

const ALGORITHM = "aes-256-ctr";

export default async function (options: { password: string }) {
  let initiated = false;
  const password = Buffer.from(options.password, "hex");
  const iv = crypto.randomBytes(16);
  const encrypt = crypto.createCipheriv(ALGORITHM, password, iv);
  return build(
    function (source) {
      const myTransportStream = new Transform({
        autoDestroy: true,
        objectMode: true,
        transform(chunk, _enc, end) {
          if (!initiated) {
            initiated = true;
            this.push(Buffer.concat([iv, chunk]));
          } else {
            this.push(chunk);
          }
          end();
        },
      });
      pipeline(source, encrypt, myTransportStream, () => {});
      return myTransportStream;
    },
    {
      enablePipelining: true,
    }
  );
}
```
But I get:
```
TypeError [ERR_INVALID_ARG_TYPE]: The "chunk" argument must be of type string or an instance of Buffer or Uint8Array. Received an instance of Object
    at new NodeError (node:internal/errors:372:5)
    at _write (node:internal/streams/writable:312:13)
    at Cipheriv.Writable.write (node:internal/streams/writable:334:10)
    at Transform.ondata (node:internal/streams/readable:754:22)
    at Transform.emit (node:events:527:28)
    at addChunk (node:internal/streams/readable:315:12)
    at readableAddChunk (node:internal/streams/readable:289:9)
    at Transform.Readable.push (node:internal/streams/readable:228:10)
    at push (/mnt/spare/ent/back/Plugin-Stix-Core-API/node_modules/split2/index.js:76:10)
    at Transform.transform [as _transform] (/mnt/spare/ent/back/Plugin-Stix-Core-API/node_modules/split2/index.js:44:7)
Emitted 'error' event on ThreadStream instance at:
```
I’m not worried about micro-optimization, but if you can and want to point out or raise arguments either way, feel free to do so.
I’m using fastify’s Pino configuration, which is basically the same configuration as the vanilla Pino package:
```ts
transport: {
  pipeline: [
    {
      target: "./transform-log.js",
      options: {
        password:
          "f8647d5417039b42c88a75897109049378cdfce528a7e015656bd23cd18fb78a",
      },
    },
    {
      target: "pino/file",
      options: {
        destination: file,
      },
    },
  ],
},
```
Answer
How can I reuse the same `const encrypt = crypto.createCipheriv(ALGORITHM, password, iv);` instance, so as not to create a new one every time?
You can, but you should not. Since you need to supply the IV, it should be clear that the function is not made for reuse: the IV should be different for each message.
`this.push(Buffer.concat([iv, chunk]));`
This is most certainly wrong. You could get away with this for CBC mode, but in CTR mode each bit/byte of plaintext is XOR’ed with the internally created key stream. So each bit/byte of plaintext/ciphertext is fully independent. This means that a prefixed IV won’t do anything when it comes to the generated ciphertext for the chunk.
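To make that independence concrete, here is a small standalone sketch (not part of your transport; it just uses Node’s built-in crypto module) that encrypts two plaintexts differing in a single byte under the same key and IV. Only the corresponding ciphertext byte changes, because CTR simply XORs each byte with the key stream:

```ts
import crypto from "node:crypto";

const key = crypto.randomBytes(32);
const iv = crypto.randomBytes(16);

// Encrypt a buffer with AES-256-CTR under a fixed key/IV.
function encryptOnce(plaintext: Buffer): Buffer {
  const cipher = crypto.createCipheriv("aes-256-ctr", key, iv);
  return Buffer.concat([cipher.update(plaintext), cipher.final()]);
}

const a = Buffer.from("some log line to encrypt");
const b = Buffer.from(a);
b[5] ^= 0xff; // flip a single plaintext byte

const ca = encryptOnce(a);
const cb = encryptOnce(b);

// Prints only index 5: each ciphertext byte depends solely on the
// plaintext byte and key-stream byte at the same position.
for (let i = 0; i < ca.length; i++) {
  if (ca[i] !== cb[i]) console.log("ciphertexts differ at index", i);
}
```

The same key/IV is reused here only to expose CTR’s position-by-position behaviour; in a real transport that reuse is exactly what you must avoid.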
Do I gain some performance by doing this refactoring?
No, that’s unlikely, and it is unlikely to be worth the effort.
It may increase performance slightly, as there is – theoretically – no repeated AES key schedule (subkey derivation). That is, however, a very lightweight operation. It could also reuse some buffers, but AES doesn’t require large buffers either.
The IV needs to be unique, and therefore you should create a fresh one for each run of the created cipher instance. It may be more performant for e.g. CTR mode to use some kind of sequence number instead of a random value. Beware though that any collision will almost completely destroy any confidentiality that you want to achieve.
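If you do go the sequence-number route, one possible construction (a sketch only, with hypothetical names, not something your transport currently does) is to put a per-message counter in the high 8 bytes of the 16-byte CTR IV and leave the low 8 bytes zero, so the block counter inside a message can never run into the key stream of the next message:

```ts
import crypto from "node:crypto";

// Hypothetical nonce factory for AES-CTR: the first 8 bytes hold a
// per-message sequence number, the last 8 bytes stay zero so CTR's
// per-block counter has room to increment without overlapping the
// key stream of any other message encrypted under the same key.
function makeNonceFactory() {
  let messageCounter = 0n;
  return function nextIv(): Buffer {
    const iv = Buffer.alloc(16); // low 8 bytes remain zero
    iv.writeBigUInt64BE(messageCounter++, 0);
    return iv;
  };
}

const nextIv = makeNonceFactory();
const cipher = crypto.createCipheriv(
  "aes-256-ctr",
  crypto.randomBytes(32),
  nextIv() // a never-repeating IV per message for this key
);
```

Note that this retains a counter in addition to the key, and that counter must never repeat for the same key (it must survive restarts, for example) – which is exactly the synchronization burden discussed next.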
I’d strongly advise you to only retain the key between operations.
You can store a large synchronized counter as well to act as a nonce, but beware that keeping such a counter synced is quite a tricky problem. As such a nonce is generally part of a protocol, e.g. a message counter, it would still be better to supply the IV when starting encryption, i.e. as a parameter rather than a field.
All this is unnecessary for a random IV, which can simply be included with – e.g. prefixed to – the ciphertext and extracted before decryption.
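Applied to your first (working) transport, that would mean keeping only the key around, generating a fresh random IV inside transform, and prefixing it to each encrypted line. This is a sketch along those lines – same pino-abstract-transport setup and hex-encoded lines as in your original, but untested against your pipeline:

```ts
import build from "pino-abstract-transport";
import { pipeline, Transform } from "node:stream";
import crypto from "node:crypto";

const ALGORITHM = "aes-256-ctr";

export default async function (options: { password: string }) {
  // Only the key is retained between operations.
  const password = Buffer.from(options.password, "hex");
  return build(
    function (source) {
      const myTransportStream = new Transform({
        autoDestroy: true,
        objectMode: true,
        transform(chunk, _enc, end) {
          // Fresh random IV per log line, prefixed to the ciphertext so
          // it can be extracted again before decryption.
          const iv = crypto.randomBytes(16);
          const cipher = crypto.createCipheriv(ALGORITHM, password, iv);
          const encrypted = Buffer.concat([
            iv,
            cipher.update(JSON.stringify(chunk)),
            cipher.final(),
          ]);
          this.push(encrypted.toString("hex") + "\n");
          end();
        },
      });
      pipeline(source, myTransportStream, () => {});
      return myTransportStream;
    },
    { enablePipelining: true }
  );
}
```

To decrypt a line, slice off the first 16 bytes (32 hex characters) as the IV and feed the rest to crypto.createDecipheriv with the same key.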