Filtered by Node

My site's now NextJS - And I (almost) regret it already

December 17, 2021
8 comments React, Django, Node, JavaScript

My personal blog was a regular Django website with jQuery (later switched to Cash) for dynamic bits. In December 2021 I rewrote it in NextJS. It was a fun journey and NextJS is great but it's really not without some regrets.

Some flashpoints for note and comparison:

React SSR is awesome

The way infinitely nested comments are rendered is isomorphic now. Before, I had to code it once as a Jinja2 template thing and once as a Cash (a fork of jQuery) thing. That's the beauty and the promise of JavaScript, React, and server-side rendering.

JS bloat

The total JS payload is now ~111KB in 16 files. It used to be ~36KB in 7 files. :(

[Screenshot: JS payload before]

[Screenshot: JS payload after]

Data still comes from Django

Like any website, the web pages are made up of A) getting the raw data from a database and B) rendering that data as HTML.
I didn't want to rewrite all the database queries in Node (inside getServerSideProps).

What I did was move all the data-gathering Django code under an /api/v1/ prefix that publishes simple JSON blobs. This is exposed on 127.0.0.1:3000, which the Node server fetches from. And I wired up that API endpoint so I can debug it via the web too. E.g. /api/v1/plog/sort-a-javascript-array-by-some-boolean-operation

Now, all I have to do is write some TypeScript interfaces that hopefully match the JSON that comes from Django. For example, here's the getServerSideProps code for getting the data to this page:


export async function getServerSideProps() {
  const url = `${API_BASE}/api/v1/plog/`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`${response.status} on ${url}`);
  }
  const data: ServerData = await response.json();
  const { groups } = data;

  return {
    props: {
      groups,
    },
  };
}
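
For illustration, the ServerData interface might look something like this. This is just a sketch of the JSON shape; the real field names and types in the project may differ:

// Hypothetical shape of what the Django endpoint returns
interface Post {
  oid: string;
  title: string;
  pub_date: string;
  categories: string[];
  comments: number;
}

interface Group {
  date: string;
  posts: Post[];
}

interface ServerData {
  groups: Group[];
}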

I like this pattern! Yes, there's overhead, and Node could talk directly to PostgreSQL, but the upside is decoupling. And with good caching in front, the performance overhead rarely matters.

Server + CDN > static site generation

I considered full-blown static generation, but it's not an option. My little blog only has about 1,400 blog posts, but you can also filter by tags, by combinations of tags, and paginate those combinations. E.g. /oc-JavaScript/oc-Python/p3. So the total number of pages is probably in the tens of thousands.

So, server-side rendering it is. To accomplish that I set up a very simple Express server. It proxies some stuff over to the Django server (e.g. /rss.xml) and then lets NextJS handle the rest.


import next from "next";
import express from "express";

const port = parseInt(process.env.PORT || "3000", 10);
const dev = process.env.NODE_ENV !== "production";
const app = next({ dev });
const handle = app.getRequestHandler();

app
  .prepare()
  .then(() => {
    const server = express();

    // Let NextJS handle everything else
    server.use(handle);

    server.listen(port, (err) => {
      if (err) throw err;
      console.log(`> Ready on http://localhost:${port}`);
    });
  })
  .catch((error) => {
    console.error(error);
    process.exit(1);
  });
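
The proxying part (e.g. /rss.xml) isn't shown above. As a sketch, assuming the http-proxy-middleware package, the import goes at the top of the file and the server.use() call goes inside the .prepare().then() block, before server.use(handle):

import { createProxyMiddleware } from "http-proxy-middleware";

// DJANGO_BASE_URL is an assumption; point it at wherever Django listens.
const DJANGO_BASE_URL = process.env.DJANGO_BASE_URL || "http://127.0.0.1:8000";

// Send Django-owned paths straight through to the Django server
server.use(
  "/rss.xml",
  createProxyMiddleware({ target: DJANGO_BASE_URL, changeOrigin: true })
);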

Now, my site is behind a CDN. And technically, it's behind Nginx too where I do some proxy_pass in-memory caching as a second line of defense.
Requests come in like this:

  1. from user to CDN
  2. from CDN to Nginx
  3. from Nginx to Express (proxy_pass)
  4. from Express to next().getRequestHandler()

And I set Cache-Control in res.setHeader("Cache-Control", "public,max-age=86400") from within the getServerSideProps functions in the src/pages/**/*.tsx files. And once that's set, the response will be cached both in Nginx and in the CDN.
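
As a sketch, that looks roughly like this inside a page's getServerSideProps (the max-age value is just the example used above):

export async function getServerSideProps({ res }) {
  // Cache the server-rendered response downstream (Nginx and the CDN)
  res.setHeader("Cache-Control", "public,max-age=86400");
  // ...fetch data from the Django API as shown earlier...
  return { props: {} };
}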

Any caching is tricky when you need to do revalidation. Especially when you roll out a new central feature in the core bundle. But I quite like this pattern of a slow-rolling upgrade as individual pages eventually expire throughout the day.

There's a nasty bug with this and I don't yet know how to solve it. Client-side navigation depends on a build hash. So loading this page, when done with client-side navigation, becomes /_next/data/2ps5rE-K6E39AoF4G6G-0/en/plog.json (no, I don't know how that hashed URL is determined). But if a new deployment happens, the new URL becomes /_next/data/UhK9ANa6t5p5oFg3LZ5dy/en/plog.json, so you end up with a 404 because you started on a page based on an old JavaScript bundle that is now invalid.

Thankfully, NextJS handles it quite gracefully by throwing an error on the 404 so it proceeds with a regular link redirect which takes you away from the old page.

Client-side navigation still sucks. Kinda.

Next has a built-in <Link> component that you use like this:


import Link from "next/link";

...

<Link href={"/plog/" + post.oid}>
  {post.title}
</Link>

Now, clicking any of those links will automatically enable client-side routing. Thankfully, it takes care of preloading the necessary JavaScript (and CSS) simply by hovering over the link, so that when you eventually click it just needs to do an XHR request to get the JSON necessary to be able to render the page within the loaded app (and then do the pushState stuff to change the URL accordingly).

It sounds good in theory but it kinda sucks, because unless you have a really good Internet connection (or you happen to hit a CDN-cold URL), nothing happens when you click. This isn't NextJS's fault, but I wonder if it's actually horrible for users.

Yes, it sucks that a user clicks something and nothing happens. (I think it would be better if it was a button press and not a link, because buttons feel more like an app whereas links have deeply ingrained UX expectations.) But most of the time it's honestly very fast, and when it works it's a nice experience. It's a great piece of functionality for more app'y sites, but less good for websites where most of the traffic comes from direct links or Google searches.

NextJS has built-in critical CSS optimization

Critical inline CSS is critical (pun intended) for web performance. Especially on my poor site where I depend on a bloated (and now ancient) CSS framework called Semantic-UI. Without inline CSS, the minified CSS file would become over 200KB.

In NextJS, to enable inline critical CSS loading you just need to add this to your next.config.js:


// next.config.js
module.exports = {
  experimental: { optimizeCss: true },
};

and you have to add critters to your package.json (e.g. yarn add critters). I've found some bugs with it but nothing I can't work around.

Conclusion and what's next

I'm very familiar and experienced with React, but NextJS is new to me. I've managed to miss it all these years. Until now. So there's still a lot to learn. With other frameworks, I've always been comfortable with not actually understanding how Webpack and Babel work (internally), but at least I understood when and how I was calling/depending on them. Now, with NextJS there's a lot of abstracted magic that I don't quite understand. It's hard to let go of that. It's hard to adopt powerful, complex tools created by large groups of people and also understand it all. If you're desperate to understand exactly how something works, you inevitably have to scale back the amount of stuff you're leveraging. (Note, it might be different if it's absolutely core to what you do for work and hack on for 8 hours a day.)

The JavaScript bundles in NextJS lazy-load quite decently, but it's definitely more bloat than it needs to be. It's partially up to me to fix, because much of the JS code on my site is for things that can technically wait, such as the interactive commenting form and the auto-complete search.

But here's the rub; my site is not an app. Most traffic comes from people doing a Google search, clicking on my page, and then bugger off. It's quite static that way and who am I to assume that they'll stay and click around and reuse all that loaded JavaScript code.

With that said; I'm going to start an experiment to rewrite the site again in Remix.

Brotli compression quality comparison in the real world

December 1, 2021
2 comments Node, JavaScript

At work, we use Brotli (via the Node built-in zlib) to compress large .json files into .json.br files. When using zlib.brotliCompress you can set options to override the quality number. Here's an example of it at quality 6:


import { promisify } from 'util'
import zlib from 'zlib'
const brotliCompress = promisify(zlib.brotliCompress)

const options = {
  params: {
    [zlib.constants.BROTLI_PARAM_MODE]: zlib.constants.BROTLI_MODE_TEXT,
    [zlib.constants.BROTLI_PARAM_QUALITY]: 6,
  },
}

export async function compress(data) {
  return brotliCompress(data, options)
}

But what if you mess with that number? A higher quality surely makes the files smaller, but at what cost? Well, I wrote a Node script that measures how long it takes to compress 6 large (~25MB each) .json files synchronously, at each quality level. Then I put the numbers into a Google spreadsheet and voila:

[Chart: total size per quality level]

[Chart: total seconds per quality level]

Miles away from rocket science but I thought it was cute to visualize as a way of understanding the quality option.
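
For reference, a minimal sketch of the kind of timing script described above might look like this (the file list is just a placeholder):

import fs from "fs";
import zlib from "zlib";

// Hypothetical list of large .json files to test with
const files = ["one.json", "two.json", "three.json"];

for (let quality = 1; quality <= 11; quality++) {
  let totalBytes = 0;
  const t0 = process.hrtime.bigint();
  for (const file of files) {
    const compressed = zlib.brotliCompressSync(fs.readFileSync(file), {
      params: {
        [zlib.constants.BROTLI_PARAM_MODE]: zlib.constants.BROTLI_MODE_TEXT,
        [zlib.constants.BROTLI_PARAM_QUALITY]: quality,
      },
    });
    totalBytes += compressed.length;
  }
  const seconds = Number(process.hrtime.bigint() - t0) / 1e9;
  console.log(
    `quality=${quality} size=${(totalBytes / 1024 / 1024).toFixed(1)}MB time=${seconds.toFixed(2)}s`
  );
}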

How to bulk-insert Firestore documents in a Firebase Cloud function

September 23, 2021
1 comment Node, Firebase, JavaScript

You can't batch-add/bulk-insert documents in the Firebase Web SDK. But you can with the Firebase Admin Node SDK. Like, in a Firebase Cloud Function. Here's an example of how to do that:


// Assumes firebase-admin has been imported and initialized, e.g.:
// import * as admin from "firebase-admin";
// admin.initializeApp();
const firestore = admin.firestore();
let batch = firestore.batch();
let counter = 0;
let totalCounter = 0;
const promises = [];
for (const thing of MANY_MANY_THINGS) {
  const docRef = firestore.collection("MY_COLLECTION").doc();
  batch.set(docRef, {
    foo: thing.foo,
    bar: thing.bar,
    favNumber: 0,
  });
  counter++;
  if (counter >= 500) {
    // Firestore batches max out at 500 operations
    console.log(`Committing batch of ${counter}`);
    promises.push(batch.commit());
    totalCounter += counter;
    counter = 0;
    batch = firestore.batch();
  }
}
if (counter) {
  console.log(`Committing batch of ${counter}`);
  promises.push(batch.commit());
  totalCounter += counter;
}
await Promise.all(promises);
console.log(`Committed total of ${totalCounter}`);

I'm using this in a Cloud HTTP function where I can submit a large amount of data and have each one fill up a collection.

In JavaScript (Node) which is fastest, generator function or a big array function?

March 5, 2021
0 comments Node, JavaScript

Sorry about the weird title of this blog post. Not sure what else to call it.

I have a function that recursively traverses the file system. You can iterate over this function to do something with each found file on disk. Silly example:


for (const filePath of walker("/lots/of/files/here")) {
  count += filePath.length;
}

The implementation looks like this:


const fs = require("fs");
const path = require("path");

function* walker(root) {
  const files = fs.readdirSync(root);
  for (const name of files) {
    const filepath = path.join(root, name);
    const isDirectory = fs.statSync(filepath).isDirectory();
    if (isDirectory) {
      yield* walker(filepath);
    } else {
      yield filepath;
    }
  }
}

But I wondered: is it faster to not use a generator function, since there might be an overhead in swapping between the generator and whatever does something with each yielded thing? A pure big-array function looks like this:


function walker(root) {
  const files = fs.readdirSync(root);
  const all = [];
  for (const name of files) {
    const filepath = path.join(root, name);
    const isDirectory = fs.statSync(filepath).isDirectory();
    if (isDirectory) {
      all.push(...walker(filepath));
    } else {
      all.push(filepath);
    }
  }
  return all;
}

It gets the same result/outcome.

It's hard to measure this but I pointed it to some large directory with many files and did something silly with each one just to make sure it does something:


const label = "generator";
console.time(label);
let count = 0;
for (const filePath of walker(SEARCH_ROOT)) {
  count += filePath.length;
}
console.timeEnd(label);
const heapBytes = process.memoryUsage().heapUsed;
console.log(`HEAP: ${(heapBytes / 1024.0).toFixed(1)}KB`);

I ran it a bunch of times. After a while, the numbers settle and you get:

  • Generator function: (median time) 1.74s
  • Big array function: (median time) 1.73s

In other words, no speed difference.

Obviously building up a massive array in memory will increase the heap memory usage. Taking a snapshot at the end of the run and printing it each time, you can see that...

  • Generator function: (median heap memory) 4.9MB
  • Big array function: (median heap memory) 13.9MB

Conclusion

The potential swap overhead for a Node generator function is absolutely minuscule. At least in contexts similar to mine.

It's not unexpected that the generator function uses less heap memory, because it doesn't build up a big array at all.

What's lighter than ExpressJS?

February 25, 2021
0 comments Node, JavaScript

tl;dr; polka is the lightest Node HTTP server package.

Highly unscientific but nevertheless worth writing down. Lightest here refers to the eventual weight added to the node_modules directory which is a reflection of network and disk use.

When you write a serious web server in Node you probably don't care about which one is lightest. It's probably more important which ones are actively maintained, reliable, well documented, and generally "more familiar". However, I was interested in setting up a little Node HTTP server for the benefit of wrapping some HTTP endpoints for an integration test suite.

The test

In a fresh new directory, right after having run yarn init -y, run the yarn add ... and see how big the node_modules directory becomes afterward (du -sh node_modules).

The results

  1. polka: 116K
  2. koa: 1.7M
  3. express: 2.4M
  4. fastify: 8.0M

[Bar chart: node_modules size per package]

Conclusion

polka is the lightest. But I'm not so sure it matters. But it could if this has to be installed a lot. For example, in CI where you run that yarn install a lot. Then it might save quite a bit of electricity for the planet.

The best and simplest way to parse an RSS feed in Node

February 13, 2021
0 comments Node, JavaScript

There are a lot of 'rss'-related NPM packages but I think I've found a combination that is great for parsing RSS feeds. Something that takes up minimal node_modules and works great. I think the killer combination is got (for downloading) and fast-xml-parser (for parsing).

The code is impressively simple:


const got = require("got");
const parser = require("fast-xml-parser");

(async function main() {
  const buffer = await got("https://hacks.mozilla.org/feed/", {
    responseType: "buffer",
    resolveBodyOnly: true,
    timeout: 5000,
    retry: 5,
  });
  const feed = parser.parse(buffer.toString());
  for (const item of feed.rss.channel.item) {
    console.log({ title: item.title, url: item.link });
    break;
  }
})();


// Outputs...
// {
//   title: 'MDN localization update, February 2021',
//   url: 'https://hacks.mozilla.org/2021/02/mdn-localization-update-february-2021/'
// }

What I like about fast-xml-parser is that it has no dependencies. And it's tiny:

▶ du -sh node_modules/fast-xml-parser
104K    node_modules/fast-xml-parser

The got package is quite a bit larger and has more dependencies. But I still love it. It's proven itself to be very reliable and has a very pleasant API. Both packages support TypeScript too.

A particular detail I like about fast-xml-parser is that it doesn't try to do the downloading part too. This way, I can use my own preferred library, and I could potentially write my own caching code if I want to protect against a flaky network.
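
A minimal sketch of what that caching could look like, assuming a naive disk cache with an arbitrary 15-minute TTL (the cache file path is made up):

const fs = require("fs");
const got = require("got");

const CACHE_FILE = "/tmp/feed-cache.xml";
const MAX_AGE_MS = 15 * 60 * 1000;

async function getFeedXML(url) {
  try {
    // Serve from the disk cache if it's fresh enough
    const { mtimeMs } = fs.statSync(CACHE_FILE);
    if (Date.now() - mtimeMs < MAX_AGE_MS) {
      return fs.readFileSync(CACHE_FILE, "utf-8");
    }
  } catch (err) {
    // Cache miss; fall through to fetching
  }
  const body = await got(url, { resolveBodyOnly: true, timeout: 5000, retry: 5 });
  fs.writeFileSync(CACHE_FILE, body);
  return body;
}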

sharp vs. jimp - Node libraries to make thumbnail images

December 15, 2020
3 comments Node, JavaScript, Firebase

I recently wrote a Google Firebase Cloud function that resizes images on-the-fly and after having published that I discovered that sharp is "better" than jimp. And by better I mean better performance.

To reach this conclusion I wrote a simple script that loops over a bunch of .png and .jpg files I had lying around and compares how long each implementation takes to resize them. Here are the results:

Using jimp

▶ node index.js ~/Downloads
Sum size before: 41.1 MB (27 files)
...
Took: 28.278s
Sum size after: 337 KB

Using sharp

▶ node index.js ~/Downloads
Sum size before: 41.1 MB (27 files)
...
Took: 1.277s
Sum size after: 200 KB

The files are in the region of 100-500KB, a couple that are 1-3MB, and 1 that is 18MB.

So basically: 28 seconds for jimp and 1.3 seconds for sharp

Bonus, the code

Don't ridicule me for my benchmarking code. These are quick hacks. Let's focus on the point.

sharp


const path = require("path");
const sharp = require("sharp");
const { readFile, writeFile, stat } = require("fs").promises;
// humanFileSize() is a small byte-formatting helper (not shown here)

function f1(sourcePath, destination) {
  return readFile(sourcePath).then((buffer) => {
    console.log(sourcePath, "is", humanFileSize(buffer.length));
    return sharp(sourcePath)
      .rotate()
      .resize(100)
      .toBuffer()
      .then((data) => {
        const destPath = path.join(destination, path.basename(sourcePath));
        return writeFile(destPath, data).then(() => {
          return stat(destPath).then((s) => s.size);
        });
      });
  });
}

jimp


const Jimp = require("jimp");

function f2(sourcePath, destination) {
  return readFile(sourcePath).then((buffer) => {
    console.log(sourcePath, "is", humanFileSize(buffer.length));
    return Jimp.read(sourcePath).then((img) => {
      const destPath = path.join(destination, path.basename(sourcePath));
      img.resize(100, Jimp.AUTO);
      return img.writeAsync(destPath).then(() => {
        return stat(destPath).then((s) => s.size);
      });
    });
  });
}

I test them like this:


console.time("Took");
const res = await Promise.all(files.map((file) => f1(file, destination)));
console.timeEnd("Took");

And just to be absolutely sure, I run them separately so the whole process is dedicated to one implementation.

downloadAndResize - Firebase Cloud Function to serve thumbnails

December 8, 2020
0 comments Web development, That's Groce!, Node, JavaScript

UPDATE 2020-12-30

With sharp after you've loaded the image (sharp(contents)) make sure to add .rotate() so it automatically rotates the image correctly based on EXIF data.

UPDATE 2020-12-13

I discovered that sharp is much better than jimp. It's an order of magnitude faster. And it's actually what the Firebase Resize Images extension uses. Code updated below.

I have a Firebase app that uses the Firebase Cloud Storage to upload images. But now I need thumbnails. So I wrote a cloud function that can generate thumbnails on-the-fly.

There's a Firebase Extension called Resize Images which is nicely done but I just don't like that strategy. At least not for my app. Firstly, I'm forced to pick the right size(s) for thumbnails and I can't really go back on that. If I pick 50x50, 1000x1000 as my sizes, and depend on that in the app, and then realize that I actually want it to be 150x150, 500x500 then I'm quite stuck.

Instead, I want to pick any thumbnail sizes dynamically. One option would be a third-party service like imgix, CloudImage, or Cloudinary but these are not free and besides, I'll need to figure out how to upload the images there. There are other Open Source options like picfit which you install yourself but that's not an attractive option with its implicit complexity for a side-project. I want to stay in the Google Cloud. Another option would be this AppEngine function by Albert Chen which looks nice but then I need to figure out the access control between that and my Firebase Cloud Storage. Also, added complexity.

As part of your app initialization in Firebase, it automatically has access to the appropriate storage bucket. If I do:


const storageRef = storage.ref();
uploadTask = storageRef.child('images/photo.jpg').put(file, metadata);
...

...in the Firebase app, it means I can do:


admin
  .storage()
  .bucket()
  .file('images/photo.jpg')
  .download()
  .then((downloadData) => {
    const contents = downloadData[0];
...in my cloud function and it just works!

To do the resizing I originally used Jimp, which is TypeScript-aware and easy to use, but per the update above the code below now uses sharp. Now, remember this isn't perfect or mature but it works. It solves my needs and perhaps it will solve your needs too. Or, at least it might be a good start for your application that you can build on. Here's the function (in functions/src/index.ts):


// Imports assumed at the top of functions/src/index.ts
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";
import sharp from "sharp";

admin.initializeApp();

interface StorageErrorType extends Error {
  code: number;
}

const codeToErrorMap: Map<number, string> = new Map();
codeToErrorMap.set(404, "not found");
codeToErrorMap.set(403, "forbidden");
codeToErrorMap.set(401, "unauthenticated");

export const downloadAndResize = functions
  .runWith({ memory: "1GB" })
  .https.onRequest(async (req, res) => {
    const imagePath = req.query.image || "";
    if (!imagePath) {
      res.status(400).send("missing 'image'");
      return;
    }
    if (typeof imagePath !== "string") {
      res.status(400).send("can only be one 'image'");
      return;
    }
    const widthString = req.query.width || "";
    if (!widthString || typeof widthString !== "string") {
      res.status(400).send("missing 'width' or not a single string");
      return;
    }
    const extension = imagePath.toLowerCase().split(".").slice(-1)[0];
    if (!["jpg", "png", "jpeg"].includes(extension)) {
      res.status(400).send(`invalid extension (${extension})`);
      return;
    }
    let width = 0;
    try {
      width = parseInt(widthString);
      if (isNaN(width)) {
        throw new Error("not a number");
      }
      if (width < 0) {
        throw new Error("too small");
      }
      if (width > 1000) {
        throw new Error("too big");
      }
    } catch (error) {
      res.status(400).send(`width invalid (${error.toString()})`);
      return;
    }

    admin
      .storage()
      .bucket()
      .file(imagePath)
      .download()
      .then((downloadData) => {
        const contents = downloadData[0];
        console.log(
          `downloadAndResize (${JSON.stringify({
            width,
            imagePath,
          })}) downloadData.length=${humanFileSize(contents.length)}\n`
        );

        const contentType = extension === "png" ? "image/png" : "image/jpeg";
        sharp(contents)
          .rotate()
          .resize(width)
          .toBuffer()
          .then((buffer) => {
            res.setHeader("content-type", contentType);
            // TODO increase some day
            res.setHeader("cache-control", `public,max-age=${60 * 60 * 24}`);
            res.send(buffer);
          })
          .catch((error: Error) => {
            console.error(`Error reading in with sharp: ${error.toString()}`);
            res
              .status(500)
              .send(`Unable to read in image: ${error.toString()}`);
          });
      })
      .catch((error: StorageErrorType) => {
        if (error.code && codeToErrorMap.has(error.code)) {
          res.status(error.code).send(codeToErrorMap.get(error.code));
        } else {
          res.status(500).send(error.message);
        }
      });
  });

function humanFileSize(size: number): string {
  if (size < 1024) return `${size} B`;
  const i = Math.floor(Math.log(size) / Math.log(1024));
  const num = size / Math.pow(1024, i);
  const round = Math.round(num);
  const numStr: string | number =
    round < 10 ? num.toFixed(2) : round < 100 ? num.toFixed(1) : round;
  return `${numStr} ${"KMGTPEZY"[i - 1]}B`;
}

Here's what a sample URL looks like.

I hope it helps!

I think the next thing for me to consider is to extend this so it uploads the thumbnail back and uses the getDownloadURL() of the created thumbnail as a redirect instead. It would be transparent to the app but saves on repeated views. That'd be a good optimization.
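
As a rough, untested sketch of that idea, the code below could slot into the .then((buffer) => { ... }) handler shown earlier; it saves the generated thumbnail back to the bucket and redirects to a signed URL (the thumbPath scheme and the expiry date are made-up choices, and getSignedUrl() is the Admin SDK's rough equivalent of the client-side getDownloadURL()):

// Hypothetical follow-up: persist the thumbnail and redirect to it
const thumbPath = `thumbnails/${width}/${imagePath}`;
const thumbFile = admin.storage().bucket().file(thumbPath);
await thumbFile.save(buffer, { contentType });
const [signedUrl] = await thumbFile.getSignedUrl({
  action: "read",
  expires: "03-01-2500",
});
res.redirect(302, signedUrl);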

Quick comparison between sass and node-sass

September 10, 2020
3 comments Node, JavaScript

To transpile .scss (or .sass) in Node you have the choice between sass and node-sass. sass is a JavaScript compilation of Dart Sass, which is supposedly "the primary implementation of Sass", which is a pretty powerful statement. node-sass, on the other hand, is a wrapper around LibSass, which is written in C++. Let's break it down a little bit more.

Speed

node-sass is faster. About 7 times faster. I took all the SCSS files behind the current MDN Web Docs which is fairly large. Transformed into CSS it becomes a ~180KB blob of CSS (92KB when optimized with csso).

Here's my ugly benchmark test which I run about 10 times like this:

node-sass took 101ms result 180kb 92kb
node-sass took 99ms result 180kb 92kb
node-sass took 99ms result 180kb 92kb
node-sass took 100ms result 180kb 92kb
node-sass took 100ms result 180kb 92kb
node-sass took 103ms result 180kb 92kb
node-sass took 102ms result 180kb 92kb
node-sass took 113ms result 180kb 92kb
node-sass took 100ms result 180kb 92kb
node-sass took 101ms result 180kb 92kb

And here's the same thing for sass:

sass took 751ms result 173kb 92kb
sass took 728ms result 173kb 92kb
sass took 728ms result 173kb 92kb
sass took 798ms result 173kb 92kb
sass took 854ms result 173kb 92kb
sass took 726ms result 173kb 92kb
sass took 727ms result 173kb 92kb
sass took 782ms result 173kb 92kb
sass took 834ms result 173kb 92kb

In another example, I ran sass and node-sass on ./node_modules/bootstrap/scss/bootstrap.scss (version 5.0.0-alpha1) and the results are after 5 runs:

node-sass took 269ms result 176kb 139kb
node-sass took 260ms result 176kb 139kb
node-sass took 288ms result 176kb 139kb
node-sass took 261ms result 176kb 139kb
node-sass took 260ms result 176kb 139kb

versus

sass took 1423ms result 176kb 139kb
sass took 1350ms result 176kb 139kb
sass took 1338ms result 176kb 139kb
sass took 1368ms result 176kb 139kb
sass took 1467ms result 176kb 139kb
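
For reference, the timing harness can be something as simple as this sketch (the entry-point file is a hypothetical example, and both packages are assumed to be installed):

const sass = require("sass");
const nodeSass = require("node-sass");

const file = "styles/main.scss"; // hypothetical entry point

// Compile once with Dart Sass (the `sass` package)
let t0 = Date.now();
const dartResult = sass.renderSync({ file });
console.log(`sass took ${Date.now() - t0}ms result ${Math.round(dartResult.css.length / 1024)}kb`);

// ...and once with LibSass (the `node-sass` package)
t0 = Date.now();
const libsassResult = nodeSass.renderSync({ file });
console.log(
  `node-sass took ${Date.now() - t0}ms result ${Math.round(libsassResult.css.length / 1024)}kb`
);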

Output

The unminified CSS differs primarily in the indentation. But if you minify both outputs and then pretty-print them (with prettier) you get the following difference:


▶ diff /tmp/sass.min.css.pretty /tmp/node-sass.min.css.pretty
152c152
<   letter-spacing: -0.0027777778rem;
---
>   letter-spacing: -0.00278rem;
231c231
<   content: "▼︎";
---
>   content: "\25BC\FE0E";

...snip...


2804c2812
< .external-icon:not([href^="https://mdn.mozillademos.org"]):not(.ignore-external) {
---
> .external-icon:not([href^='https://mdn.mozillademos.org']):not(.ignore-external) {

Basically, sass will produce things like letter-spacing: -0.0027777778rem; and content: "▼︎";. And node-sass will produce letter-spacing: -0.00278rem; and content: "\25BC\FE0E";.
I also noticed some minor differences in the order of some selectors, but when I looked more carefully they're immaterial order differences, meaning they don't cascade over each other in any way.

Note! I don't know why the use of ' and " is different or if it matters. I don't know why prettier (version 2.1.1) didn't pick one over the other consistently.

node_modules

Here's how I created two projects to compare:


cd /tmp
mkdir just-sass && cd just-sass && yarn init -y && time yarn add sass && cd ..
mkdir just-node-sass && cd just-node-sass && yarn init -y && time yarn add node-sass && cd ..

Considering that sass is just a JavaScript compilation of a Dart program, all you get is basically a 3.6MB node_modules/sass/sass.dart.js file.

The /tmp/just-sass/node_modules directory is only 113 files and folders weighing a total of 4.1MB.
Whereas /tmp/just-node-sass/node_modules directory is 3,658 files and folders weighing a total of 15.2MB.

I don't know about you but I'm very skeptical that node-gyp ever works. Who even has Python 2.7 installed anymore? Being able to avoid node-gyp seems like a win for sass.

Conclusion

The speed difference may or may not matter. If you're only doing it once, who cares about a couple of hundred milliseconds. But if you're forced to have to wait 1.4 seconds on every Ctrl-S when Webpack or whatever tooling you have starts up sass it might become very painful.

I don't know much about the sass-loader Webpack plugin but it apparently works with either but they do recommend sass in their documentation. And it's the default implementation too.

It's definitely a feather in sass's hat that Dart Sass is the "primary implementation" of Sass. That just has a nice feeling in sass's favor.

Bonus

NPMCompare has a nice comparison of them as projects but you have to study each row of numbers because it's rarely as simple as more (or less) number is better. For example, the number of open issues isn't a measure of bugs.

The new module system launched in October 2019 supposedly only comes to Dart Sass which means sass is definitely going to get it first. If that stuff matters to you. For example, true, the Sass unit-testing tool, now requires Dart Sass and drops support for node-sass.

findMatchesInText - Find line and column of matches in a text, in JavaScript

June 22, 2020
0 comments Node, JavaScript

I need this function for something related to open-editor, which is a Node program that can open your $EDITOR from Node and jump to a specific file, a specific line, and a specific column.

Here's the code:


function* findMatchesInText(needle, haystack, { inQuotes = false } = {}) {
  const escaped = needle.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  let rex;
  if (inQuotes) {
    rex = new RegExp(`['"](${escaped})['"]`, "g");
  } else {
    rex = new RegExp(`(${escaped})`, "g");
  }
  for (const match of haystack.matchAll(rex)) {
    const left = haystack.slice(0, match.index);
    const line = (left.match(/\n/g) || []).length + 1;
    const lastIndexOf = left.lastIndexOf("\n") + 1;
    const column = match.index - lastIndexOf + 1;
    yield { line, column };
  }
}

And you use it like this:


const text = ` bravo
Abra
cadabra

bravo
`;

console.log(Array.from(findMatchesInText("bra", text)));

Which prints:


[
  { line: 1, column: 2 },
  { line: 2, column: 2 },
  { line: 3, column: 5 },
  { line: 5, column: 1 }
]

The inQuotes option is because a lot of times this function is going to be used for finding the href value in unstructured documents that contain HTML <a> tags.
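
For example, a hypothetical snippet of HTML (the href value is made up) would be matched like this:

const html = `<a href="/plog/some-post">Some post</a>`;

console.log(
  Array.from(findMatchesInText("/plog/some-post", html, { inQuotes: true }))
);
// [ { line: 1, column: 9 } ]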