How I built an index of my blog posts on my GitHub profile page

December 13, 2024
1 comment Node, GitHub

If you go to https://github.com/peterbe, it lists the most recent blog posts from this blog. The page is rebuilt every hour using GitHub Actions. This blog post is about how I built that, so that you can build something just like it.

In case you don't have access or it's quicker to look at a picture, this is what it looks like:

[Screenshot: list of blog posts]

The way GitHub profiles work is that you create a GitHub repo with the same name as your username. In my case, my username is peterbe, and the repo is thus called peterbe. So, it's at https://github.com/peterbe/peterbe. It has to be a public repo for this to work.

In that repo you have a README.md and mine looks like this: https://github.com/peterbe/peterbe/blob/main/README.md?plain=1 If you look carefully, the Markdown in that README.md contains:


<!-- blog posts -->
...
<!-- /blog posts -->

By default, HTML comments work in GitHub-flavored Markdown just like they do in HTML.

Then, I have a Node script that finds that comment block inside the file and replaces its content with a list of Markdown links.
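
In broad strokes, it does something like this (a minimal sketch of the idea, not the actual script; the posts array is a hard-coded placeholder for whatever you fetch from your blog):


import fs from "node:fs";

const START = "<!-- blog posts -->";
const END = "<!-- /blog posts -->";

// Placeholder data; in reality you would fetch your latest posts from your blog
const posts = [
  { title: "Example post", url: "https://example.com/example-post" },
];

const links = posts.map((p) => `* [${p.title}](${p.url})`).join("\n");

const readme = fs.readFileSync("README.md", "utf8");
const start = readme.indexOf(START) + START.length;
const end = readme.indexOf(END);
fs.writeFileSync(
  "README.md",
  readme.slice(0, start) + "\n" + links + "\n" + readme.slice(end)
);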

Truncated! Read the rest by clicking the link below.

Run TypeScript in Node without extensions

December 10, 2024
0 comments Node, JavaScript

A couple of months ago I wrote about "Node watch mode and TypeScript", which suggests using the @swc-node/register package to be able to run node on .ts files without first converting them to .js using tsc.

In Node 22, you don't even need @swc-node/register. You can use --experimental-strip-types instead. (Note the prefix "experimental"). See release notes here.

Imagine a script like this:


// example.ts

function c2f(c: number) {
  return (c * 9) / 5 + 32;
}
console.log(c2f(123));

You can execute it directly in Node v22 using:


❯ node --experimental-strip-types --no-warnings example.ts
253.4

Truncated! Read the rest by clicking the link below.

How I make my Vite dev server experience faster

October 22, 2024
0 comments React, Node, JavaScript

I have a web app that operates as a SPA (Single Page App). It's built on regular Vite + React + TypeScript. In production you just host the built static assets, but in development (for hot module reloading and stuff) I use the built-in dev server in Vite. This is what you get when you type vite without any other command line arguments.

But here's what happens: you're in the terminal, you type npm run dev, and hit Enter. Then, using your trackpad or keyboard shortcuts, you switch over to a browser tab and go to http://localhost:5173/. On that first request, Vite starts at the entry point, which is src/main.tsx, and from there it looks at its imports and starts transpiling the files needed. You can see what happens with vite --debug transform.

With debug:

Truncated! Read the rest by clicking the link below.

Wouter + Vite is the new create-react-app, and I love it

August 16, 2024
0 comments React, Node, Bun

If you've done React for a while, you most likely remember Create React App. It was/is a prepared config that combines React with webpack and ESLint. Essentially, you get immediate access to making apps with React in a local dev server, and it produces a complete build artefact that you can upload to a web server to host your SPA (Single Page App). I loved it and blogged a lot about it in the distant past.

The create-react-app project died, and what came onto the scene were tools that solved React rendering configs with SSR (Server Side Rendering). In particular, we now have great frameworks like Gatsby, Next.js, Remix, and Astro. They're great, especially if you want to use server-side rendering with code-splitting by route and that sweet TypeScript integration between your server (fs, databases, secrets) and your rendering components.

However, I still think there is a place for a super light and simple SPA tool that only adds routing, hot module reloading, and build artefacts. For that, I love Vite + Wouter. At least for now :)

What's so great about it? Speed

Truncated! Read the rest by clicking the link below.

Converting Celsius to Fahrenheit round-up

July 22, 2024
0 comments Go, Node, Python, Bun, Ruby, Rust, JavaScript

In the last couple of days, I've created variations of a simple algorithm to demonstrate how Celsius and Fahrenheit seem to relate to each other if you "mirror the number".
It wasn't supposed to be about the programming language. Still, I used Python in the first one, and I noticed that since the code is simple, it could be fun to write variants of it in other languages.

  1. Converting Celsius to Fahrenheit with Python
  2. Converting Celsius to Fahrenheit with TypeScript
  3. Converting Celsius to Fahrenheit with Go
  4. Converting Celsius to Fahrenheit with Ruby
  5. Converting Celsius to Fahrenheit with Crystal
  6. Converting Celsius to Fahrenheit with Rust
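
If you're curious, the underlying conversion and the "mirror" observation it plays with look roughly like this in JavaScript (my own illustration, not code from those posts):


function c2f(c) {
  return (c * 9) / 5 + 32;
}

// "Mirror" a two-digit number by reversing its digits, e.g. 16 -> 61 and 4 (04) -> 40
function mirror(n) {
  return Number(String(n).padStart(2, "0").split("").reverse().join(""));
}

for (const c of [4, 16, 28]) {
  console.log(`${c}°C is ${c2f(c)}°F, and the mirror of ${c} is ${mirror(c)}`);
}
// 4°C is 39.2°F, and the mirror of 4 is 40
// 16°C is 60.8°F, and the mirror of 16 is 61
// 28°C is 82.4°F, and the mirror of 28 is 82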

It was a fun exercise.

Truncated! Read the rest by clicking the link below.

Node watch mode and TypeScript

July 21, 2024
0 comments Node, JavaScript

UPDATE

See "Run TypeScript in Node without extensions" as of Dec 10, 2024. (5 months later)


You might have heard that Node now has watch mode. It watches the files you're saving and re-runs the node command automatically. Example:


// example.js

function c2f(c) {
  return (c * 9) / 5 + 32;
}
console.log(c2f(0));

Now, run it like this:

❯ node --watch example.js
32
Completed running 'example.js'

Edit that example.js and the terminal will look like this:

Restarting 'example.js'
32
Completed running 'example.js'

(even if the file didn't change, i.e. you just hit Cmd-S to save)

Truncated! Read the rest by clicking the link below.

How slow is Node to Brotli decompress a file compared to not having to decompress?

January 19, 2024
3 comments Node, macOS, Linux

tl;dr; Not very slow.

At work, we have some very large .json files that get included in a Docker image. The Node server then opens these files at runtime and displays certain data from them. To make the Docker image not too large, we compress these .json files at build-time. We compress the .json files with Brotli to make a .json.br file. Then, in the Node server code, we read them in and decompress them at runtime. It looks something like this:


import fs from "fs";
import { brotliDecompressSync } from "zlib";

export function readCompressedJsonFile(xpath) {
  return JSON.parse(brotliDecompressSync(fs.readFileSync(xpath)));
}
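
For completeness, the build-time half of that (a rough sketch, not the actual build script) could look something like this:


import fs from "fs";
import { brotliCompressSync } from "zlib";

// At build time, compress each .json file into a .json.br file next to it
for (const xpath of ["pageinfo-en.json", "pageinfo-en-ja-es.json"]) {
  fs.writeFileSync(`${xpath}.br`, brotliCompressSync(fs.readFileSync(xpath)));
}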

The advantage of compressing them first, at build time (which happens in GitHub Actions), is that the Docker image becomes smaller, which is advantageous when shipping that image to a registry and asking Azure App Service to deploy it. But I was wondering, is this a smart trade-off? In a sense, why compromise on runtime (which faces users) to save time and resources at build-time, which is mostly done away from the eyes of users? The question was: how much overhead is it to have to decompress the files after their data has been read from disk into memory?

The benchmark

The files I test with are as follows:

ls -lh pageinfo*
-rw-r--r--  1 peterbe  staff   2.5M Jan 19 08:48 pageinfo-en-ja-es.json
-rw-r--r--  1 peterbe  staff   293K Jan 19 08:48 pageinfo-en-ja-es.json.br
-rw-r--r--  1 peterbe  staff   805K Jan 19 08:48 pageinfo-en.json
-rw-r--r--  1 peterbe  staff   100K Jan 19 08:48 pageinfo-en.json.br

There are 2 groups:

  1. Only English (en)
  2. 3 times larger because it has English, Japanese, and Spanish

And for each file, you can see the effect of having compressed them with Brotli.

  1. The smaller JSON file compresses 8x
  2. The larger JSON file compresses 9x

Here's the benchmark code:


import fs from "fs";
import { brotliDecompressSync } from "zlib";
import { Bench } from "tinybench";

const JSON_FILE = "pageinfo-en.json";
const BROTLI_JSON_FILE = "pageinfo-en.json.br";
const LARGE_JSON_FILE = "pageinfo-en-ja-es.json";
const BROTLI_LARGE_JSON_FILE = "pageinfo-en-ja-es.json.br";

function f1() {
  const data = fs.readFileSync(JSON_FILE, "utf8");
  return Object.keys(JSON.parse(data)).length;
}

function f2() {
  const data = brotliDecompressSync(fs.readFileSync(BROTLI_JSON_FILE));
  return Object.keys(JSON.parse(data)).length;
}

function f3() {
  const data = fs.readFileSync(LARGE_JSON_FILE, "utf8");
  return Object.keys(JSON.parse(data)).length;
}

function f4() {
  const data = brotliDecompressSync(fs.readFileSync(BROTLI_LARGE_JSON_FILE));
  return Object.keys(JSON.parse(data)).length;
}

console.assert(f1() === 2633);
console.assert(f2() === 2633);
console.assert(f3() === 7767);
console.assert(f4() === 7767);

const bench = new Bench({ time: 100 });
bench.add("f1", f1).add("f2", f2).add("f3", f3).add("f4", f4);
await bench.warmup(); // make results more reliable, ref: https://github.com/tinylibs/tinybench/pull/50
await bench.run();

console.table(bench.table());

Here's the output from tinybench:

┌─────────┬───────────┬─────────┬────────────────────┬──────────┬─────────┐
│ (index) │ Task Name │ ops/sec │ Average Time (ns)  │  Margin  │ Samples │
├─────────┼───────────┼─────────┼────────────────────┼──────────┼─────────┤
│    0    │   'f1'    │  '179'  │  5563384.55941942  │ '±6.23%' │   18    │
│    1    │   'f2'    │  '150'  │ 6627033.621072769  │ '±7.56%' │   16    │
│    2    │   'f3'    │  '50'   │ 19906517.219543457 │ '±3.61%' │   10    │
│    3    │   'f4'    │  '44'   │ 22339166.87965393  │ '±3.43%' │   10    │
└─────────┴───────────┴─────────┴────────────────────┴──────────┴─────────┘

Note, this benchmark was done on my 2019 Intel MacBook Pro. Its disk is not what we get from the Alpine Docker image (running inside Azure App Service). To test that would be a different story. But at least we can test it in Docker locally.

I created a Dockerfile that contains...

ARG NODE_VERSION=20.10.0

FROM node:${NODE_VERSION}-alpine

and ran the same benchmark in there by running docker compose up --build. The results are:

┌─────────┬───────────┬─────────┬────────────────────┬──────────┬─────────┐
│ (index) │ Task Name │ ops/sec │ Average Time (ns)  │  Margin  │ Samples │
├─────────┼───────────┼─────────┼────────────────────┼──────────┼─────────┤
│    0    │   'f1'    │  '151'  │ 6602581.124978315  │ '±1.98%' │   16    │
│    1    │   'f2'    │  '112'  │  8890548.4166656   │ '±7.42%' │   12    │
│    2    │   'f3'    │  '44'   │ 22561206.40002191  │ '±1.95%' │   10    │
│    3    │   'f4'    │  '37'   │ 26979896.599974018 │ '±1.07%' │   10    │
└─────────┴───────────┴─────────┴────────────────────┴──────────┴─────────┘

Analysis/Conclusion

First, focusing on the smaller file: processing the .json is 25% faster than the .json.br file.

Then, the larger file: processing the .json is 16% faster than the .json.br file.

So that's what we're paying for a smaller Docker image. Depending on the size of the .json file, your app runs ~20% slower at this operation. But remember, as a file on disk (in the Docker image), it's ~8x smaller.

I think, in conclusion: it's a small price to pay. It's worth doing. But it depends on your context.
Keep in mind the numbers: to process that 300KB pageinfo-en-ja-es.json.br file, it managed 37 operations in one second. That means it took about 27 milliseconds to process that file!

The caveats

To repeat what was mentioned above: this was run on my Intel MacBook Pro. It's likely to behave differently in a real Docker image running inside Azure.

The thing that I wonder the most about is arguably something that actually doesn't matter. 🙃
When you ask it to read in a .json.br file, there's less data to pull from the disk into memory. That's a win. You lose on CPU work but gain on disk I/O. But only the net end result matters, so in a sense that's just an "implementation detail".

Admittedly, I don't know if the macOS or the Linux kernel does any caching in the layer between the physical disk and RAM for these files. The benchmark effectively asks "Hey, hard disk, please send me a file called ..." and this could be cached in some layer beyond my knowledge/comprehension. In a real production server, this only happens once, because once the whole file is read, decompressed, and parsed, it won't be asked for again. Like, ever. But in a benchmark, perhaps the very first read of the file is slower and all the other runs are unrealistically faster.
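
One crude way to get a feel for that (a sketch of my own, not something the benchmark does) is to time the very first read of a file separately from the repeated reads:


import fs from "fs";

// If OS-level caching kicks in, the first read should be noticeably slower than the rest
const FILE = "pageinfo-en-ja-es.json.br";
for (let i = 0; i < 5; i++) {
  const t0 = performance.now();
  fs.readFileSync(FILE);
  console.log(`read #${i + 1}: ${(performance.now() - t0).toFixed(2)}ms`);
}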

Feel free to clone https://github.com/peterbe/reading-json-files and mess around to run your own tests. Perhaps see what effect async can have. Or perhaps try it with Bun and its file system API.

fnm is much faster than nvm.

December 28, 2023
1 comment Node, macOS

I used nvm so that when I cd into a different repo, it would automatically load the appropriate version of node (and npm). Simply by doing cd ~/dev/remix-peterbecom, for example, it would make the node executable become whatever version the optional file ~/dev/remix-peterbecom/.nvmrc specifies. For example, v18.19.0.
And nvm helps you install and get your hands on various versions of node to switch between. Much more fine-tuned than brew install node20.

The problem with all of this is that it's horribly slow. Opening a new terminal is annoyingly slow because that triggers the entering of a directory, and nvm slowly does what it does.

The solution is to ditch it and go for fnm instead. Please, if you're an nvm user, do consider making this same jump in 2024.

Installation

Running curl -fsSL https://fnm.vercel.app/install | bash basically does a brew install, figures out what shell you have, and edits your shell config. By default, it put:


export PATH="/Users/peterbe/Library/Application Support/fnm:$PATH"
eval "`fnm env`"

...into my .zshrc file. But, I later learned you need to edit the last line to:


-eval "`fnm env`"
+eval "$(fnm env --use-on-cd)"

so that it automatically activates immediately after you've cd'ed into a directory.
If you were using direnv to do this, get rid of that. fnm does not need direnv.

Now, open a fresh new terminal and it should be set up, including tab completion. You can test it by typing fnm[TAB]. You'll see:


❯ fnm
alias                   -- Alias a version to a common name
completions             -- Print shell completions to stdout
current                 -- Print the current Node.js version
default                 -- Set a version as the default version
env                     -- Print and set up required environment variables for fnm
exec                    -- Run a command within fnm context
help                    -- Print this message or the help of the given subcommand(s)
install                 -- Install a new Node.js version
list         ls         -- List all locally installed Node.js versions
list-remote  ls-remote  -- List all remote Node.js versions
unalias                 -- Remove an alias definition
uninstall               -- Uninstall a Node.js version
use                     -- Change Node.js version

Usage

If you have .nvmrc files sprinkled about from before, fnm will read those. If you cd into a directory that contains a .nvmrc whose version fnm hasn't installed yet, you get this:


❯ cd ~/dev/GROCER/groce/
Can't find an installed Node version matching v16.14.2.
Do you want to install it? answer [y/N]:

Neat!

But if you want to set it up from scratch, go into your directory of choice, type:


fnm ls-remote

...to see what versions of node you can install. Suppose you want v20.10.0 in the current directory; run these two commands:


fnm install v20.10.0
echo v20.10.0 > .node-version

That's it!

Notes

  • I prefer the .node-version convention, so I've been going around doing mv .nvmrc .node-version in various projects

  • fnm ls is handy to see which ones you've installed already

  • If you want to temporarily use a specific version, simply type fnm use v16.20.2, for example

  • I heard good things about volta too but got a bit nervous when I found out it gets involved in installing packages and not just versions of node.

  • fnm does not concern itself with upgrading your node versions. To get the latest version of node v21.x, it's up to you to check fnm ls-remote and compare that with the output of node --version.

Comparing different efforts with WebP in Sharp

October 5, 2023
0 comments Node, JavaScript

When you, in a Node program, use sharp to convert an image buffer to a WebP buffer, you have an option called effort. The higher the number, the longer it takes, but the smaller the image it produces on disk.
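
In isolation, the option looks like this (a minimal sketch; the file name is just a placeholder):


import fs from "fs/promises";
import sharp from "sharp";

// effort goes from 0 (fastest) to 6 (slowest, smallest output)
const originalBuffer = await fs.readFile("screenshot.png");
const webpBuffer = await sharp(originalBuffer).webp({ effort: 6 }).toBuffer();
console.log(originalBuffer.length, "->", webpBuffer.length);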

I wanted to put some realistic numbers for this, so I wrote a benchmark, run on my Intel MacBook Pro.

The benchmark

It looks like this:


import fs from "fs/promises";
import sharp from "sharp";

async function e6() {
  return await f("screenshot-1000.png", 6);
}
async function e5() {
  return await f("screenshot-1000.png", 5);
}
async function e4() {
  return await f("screenshot-1000.png", 4);
}
async function e3() {
  return await f("screenshot-1000.png", 3);
}
async function e2() {
  return await f("screenshot-1000.png", 2);
}
async function e1() {
  return await f("screenshot-1000.png", 1);
}
async function e0() {
  return await f("screenshot-1000.png", 0);
}

async function f(fp, effort) {
  const originalBuffer = await fs.readFile(fp);
  const image = sharp(originalBuffer);
  const { width } = await image.metadata();
  const buffer = await image.webp({ effort }).toBuffer();
  return [buffer.length, width, { effort }];
}

Then, I ran each function in serial and measured how long it took. Then, I did that whole thing 15 times. So, in total, each function is executed 15 times. The numbers are collected and the median (P50) is reported.
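
Roughly, the measurement loop looks like this (a sketch of the approach, not the exact benchmark code):


const durations = {};
for (let i = 0; i < 15; i++) {
  for (const fn of [e0, e1, e2, e3, e4, e5, e6]) {
    const t0 = performance.now();
    await fn();
    (durations[fn.name] ??= []).push(performance.now() - t0);
  }
}

// Report the median (P50) duration for each effort level
function median(numbers) {
  const sorted = [...numbers].sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length / 2)];
}
for (const [name, times] of Object.entries(durations)) {
  console.log(name, `${median(times).toFixed(1)}ms`);
}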

A 2000x2000 pixel PNG image

1. e0: 191ms                   235KB
2. e1: 340.5ms                 208KB
3. e2: 369ms                   198KB
4. e3: 485.5ms                 193KB
5. e4: 587ms                   177KB
6. e5: 695.5ms                 177KB
7. e6: 4811.5ms                142KB

What it means is that if you use {effort: 6}, the conversion of a 2000x2000 PNG took 4.8 seconds, but the resulting WebP buffer became 142KB, whereas the least effort made it 235KB.

Comparing effort, time and size

This graph demonstrates how the (blue) time goes up the more effort you put in, and how the final size (red) goes down the more effort you put in.

A 1000x1000 pixel PNG image

1. e0: 54ms                    70KB
2. e1: 60ms                    66KB
3. e2: 65ms                    61KB
4. e3: 96ms                    59KB
5. e4: 169ms                   53KB
6. e5: 193ms                   53KB
7. e6: 1466ms                  51KB

A 500x500 pixel PNG image

1. e0: 24ms                    23KB
2. e1: 26ms                    21KB
3. e2: 28ms                    20KB
4. e3: 37ms                    19KB
5. e4: 57ms                    18KB
6. e5: 66ms                    18KB
7. e6: 556ms                   18KB

Conclusion

It's up to you, but clearly {effort: 6} is to be avoided if you're worried about the conversion taking a huge amount of time.

Perhaps the takeaway is: if you run these operations in a build step, such that you never have to do them again, the maximum effort is worth it. Beyond that, find a sweet spot for your particular environment and challenge.

Introducing hylite - a Node code-syntax-to-HTML highlighter written in Bun

October 3, 2023
0 comments Node, Bun, JavaScript

hylite is a command-line tool for syntax highlighting code into HTML. You feed it a file or some snippet of code (plus what language it is), and it returns a string of HTML.

Suppose you have:


❯ cat example.py
# This is example.py
def hello():
    return "world"

When you run this through hylite you get:


❯ npx hylite example.py
<span class="hljs-keyword">def</span> <span class="hljs-title function_">hello</span>():
    <span class="hljs-keyword">return</span> <span class="hljs-string">&quot;world&quot;</span>

Now, combined with the necessary CSS, it can finally render this:


# This is example.py
def hello():
    return "world"

(Note: At the time of writing this, npx hylite --list-css or npx hylite --css don't work unless you've git cloned the github.com/peterbe/hylite repo)

How I use it

This originated because I loved how highlight.js works. It supports numerous languages, can even guess the language, is fast as heck, and the HTML output is compact.
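
At its core, what a tool like this does is hand the code over to highlight.js (a sketch of my own, not hylite's actual source):


import hljs from "highlight.js";

// Given some source code and its language, highlight.js returns HTML
// with the hljs-* class names you see in the output above
const code = 'def hello():\n    return "world"';
const html = hljs.highlight(code, { language: "python" }).value;
console.log(html);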

Originally, my personal website, whose backend is in Python/Django, was using Pygments to do the syntax highlighting. The problem with that is it doesn't support JSX (or TSX). For example:


export function Bell({ color }: {color: string}) {
  return <div style={{ backgroundColor: color }}>Ding!</div>
}

The problem is that Python != Node, so to call out to hylite I use a sub-process. At the moment, I can't use bunx or npx because that depends on $PATH and stuff that the server doesn't have. Here's how I call hylite from Python:


import subprocess

from django.conf import settings

command = settings.HYLITE_COMMAND.split()
assert language
command.extend(["--language", language, "--wrapped"])
process = subprocess.Popen(
    command,
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True,
    cwd=settings.HYLITE_DIRECTORY,
)
process.stdin.write(code)
output, error = process.communicate()

The settings are:


HYLITE_DIRECTORY = "/home/django/hylite"
HYLITE_COMMAND = "node dist/index.js"

How I built hylite

What's different about hylite, compared to other JavaScript packages and CLIs like this, is that the development requires Bun. It's lovely because it has a built-in test runner and TypeScript transpiler, and it's just so fast at starting anything you do with it.

In my current view, I see Bun as an equivalent of TypeScript. It's convenient when developing but once stripped away it's just good old JavaScript and you don't have to worry about compatibility.

So I use bun for manual testing, like bun run src/index.ts < foo.go, but when it comes time to ship, I run bun run build (which executes src/build.ts with bun), which then builds a dist/index.js file that you can run with either node or bun anywhere.
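
For reference, such a build script might look something like this (a sketch under assumptions, not necessarily hylite's actual src/build.ts):


// Bundle src/index.ts into dist/index.js so the result runs on plain Node
await Bun.build({
  entrypoints: ["./src/index.ts"],
  outdir: "./dist",
  target: "node",
});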

By the way, the README has a section on Benchmarking. It concludes two things:

  1. node dist/index.js has the same performance as bun run dist/index.js
  2. bunx hylite is 7x faster than npx hylite, but it's bullcrap because bunx doesn't check the network for a new version (...until you restart your computer)