
downloadAndResize - Firebase Cloud Function to serve thumbnails

December 8, 2020
0 comments Web development, That's Groce!, Node, JavaScript

UPDATE 2020-12-30

With sharp, after you've loaded the image (sharp(contents)), make sure to add .rotate() so it automatically rotates the image correctly based on its EXIF data.

UPDATE 2020-12-13

I discovered that sharp is much better than jimp. It's an order of magnitude faster. And it's actually what the Firebase Resize Images extension uses. Code updated below.

I have a Firebase app that uses the Firebase Cloud Storage to upload images. But now I need thumbnails. So I wrote a cloud function that can generate thumbnails on-the-fly.

There's a Firebase Extension called Resize Images which is nicely done, but I just don't like that strategy. At least not for my app. Firstly, I'm forced to pick the right size(s) for thumbnails up front and I can't really go back on that. If I pick 50x50 and 1000x1000 as my sizes, depend on those in the app, and then realize that I actually want 150x150 and 500x500, I'm quite stuck.

Instead, I want to pick any thumbnail sizes dynamically. One option would be a third-party service like imgix, CloudImage, or Cloudinary but these are not free and besides, I'll need to figure out how to upload the images there. There are other Open Source options like picfit which you install yourself but that's not an attractive option with its implicit complexity for a side-project. I want to stay in the Google Cloud. Another option would be this AppEngine function by Albert Chen which looks nice but then I need to figure out the access control between that and my Firebase Cloud Storage. Also, added complexity.

As part of your app's initialization in Firebase, it automatically gets access to the appropriate storage bucket. If I do:


const storageRef = storage.ref();
const uploadTask = storageRef.child('images/photo.jpg').put(file, metadata);
...

...in the Firebase app, it means I can do:


admin
  .storage()
  .bucket()
  .file('images/photo.jpg')
  .download()
  .then((downloadData) => {
    const contents = downloadData[0];

...in my cloud function and it just works!

And to do the resizing I use sharp, which is fast and easy to use. Now, remember this isn't perfect or mature but it works. It solves my needs and perhaps it will solve your needs too. Or, at least it might be a good start for your application that you can build on. Here's the function (in functions/src/index.ts):


interface StorageErrorType extends Error {
  code: number;
}

const codeToErrorMap: Map<number, string> = new Map();
codeToErrorMap.set(404, "not found");
codeToErrorMap.set(403, "forbidden");
codeToErrorMap.set(401, "unauthenticated");

export const downloadAndResize = functions
  .runWith({ memory: "1GB" })
  .https.onRequest(async (req, res) => {
    const imagePath = req.query.image || "";
    if (!imagePath) {
      res.status(400).send("missing 'image'");
      return;
    }
    if (typeof imagePath !== "string") {
      res.status(400).send("can only be one 'image'");
      return;
    }
    const widthString = req.query.width || "";
    if (!widthString || typeof widthString !== "string") {
      res.status(400).send("missing 'width' or not a single string");
      return;
    }
    const extension = imagePath.toLowerCase().split(".").slice(-1)[0];
    if (!["jpg", "png", "jpeg"].includes(extension)) {
      res.status(400).send(`invalid extension (${extension})`);
      return;
    }
    let width = 0;
    try {
      width = parseInt(widthString, 10);
      if (isNaN(width)) {
        throw new Error("not a number");
      }
      if (width < 0) {
        throw new Error("too small");
      }
      if (width > 1000) {
        throw new Error("too big");
      }
    } catch (error) {
      res.status(400).send(`width invalid (${error.toString()})`);
      return;
    }

    admin
      .storage()
      .bucket()
      .file(imagePath)
      .download()
      .then((downloadData) => {
        const contents = downloadData[0];
        console.log(
          `downloadAndResize (${JSON.stringify({
            width,
            imagePath,
          })}) downloadData.length=${humanFileSize(contents.length)}\n`
        );

        const contentType = extension === "png" ? "image/png" : "image/jpeg";
        sharp(contents)
          .rotate()
          .resize(width)
          .toBuffer()
          .then((buffer) => {
            res.setHeader("content-type", contentType);
            // TODO increase some day
            res.setHeader("cache-control", `public,max-age=${60 * 60 * 24}`);
            res.send(buffer);
          })
          .catch((error: Error) => {
            console.error(`Error reading in with sharp: ${error.toString()}`);
            res
              .status(500)
              .send(`Unable to read in image: ${error.toString()}`);
          });
      })
      .catch((error: StorageErrorType) => {
        if (error.code && codeToErrorMap.has(error.code)) {
          res.status(error.code).send(codeToErrorMap.get(error.code));
        } else {
          res.status(500).send(error.message);
        }
      });
  });

function humanFileSize(size: number): string {
  if (size < 1024) return `${size} B`;
  const i = Math.floor(Math.log(size) / Math.log(1024));
  const num = size / Math.pow(1024, i);
  const round = Math.round(num);
  const numStr: string | number =
    round < 10 ? num.toFixed(2) : round < 100 ? num.toFixed(1) : round;
  return `${numStr} ${"KMGTPEZY"[i - 1]}B`;
}

Here's what a sample URL looks like.
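
For instance, with a hypothetical project (the exact host is whatever Cloud Functions gives you; image and width are the query string parameters the function reads):

https://us-central1-yourproject.cloudfunctions.net/downloadAndResize?image=images/photo.jpg&width=500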

I hope it helps!

I think the next thing for me to consider is to extend this so it uploads the thumbnail back and uses the getDownloadURL() of the created thumbnail as a redirect instead. It would be transparent to the app but would save on repeated resizing work. That'd be a good optimization.
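
A minimal sketch of that idea, reusing buffer, contentType, width, and imagePath from the function above; the thumbnails/ naming scheme is made up, and the Cloud Storage Node client's getSignedUrl() stands in for getDownloadURL():

// Hypothetical sketch (assumes an async context):
// persist the resized buffer, then redirect to it.
const thumbPath = `thumbnails/${width}/${imagePath}`;
const thumbFile = admin.storage().bucket().file(thumbPath);
await thumbFile.save(buffer, { contentType });
const [signedUrl] = await thumbFile.getSignedUrl({
  action: "read",
  expires: Date.now() + 1000 * 60 * 60 * 24, // 24h
});
res.redirect(302, signedUrl);

On subsequent requests you could first check thumbFile.exists() and redirect right away, skipping the download and resize entirely.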

Generating random avatar images in Django/Python

October 28, 2020
1 comment Web development, Django, Python

tl;dr; <img src="/avatar.random.png" alt="Random avataaar"> generates this image:

Random avataaar
(try reloading to get a new random one. Funny, aren't they?)

When you use Gravatar you can convert people's email addresses to their mugshot.
It works like this:


<img src="https://www.gravatar.com/avatar/$(md5(user.email))">

But most people don't have their mugshot on Gravatar.com, unfortunately. But you still want to display an avatar that is distinct per user. Your best option is to generate one and just use the user's name or email as a seed (so it's always random but always deterministic for the same user). And you can also supply a fallback image to Gravatar that it uses if the email doesn't match any email it has. That's where this blog post comes in.
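
For example, the fallback image goes in the d query string parameter, URL-encoded (mirroring the pseudo-template above; the fallback URL here is hypothetical):

<img src="https://www.gravatar.com/avatar/$(md5(user.email))?d=$(urlencode('https://example.com/avatar.random.png'))">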

I needed that so I shopped around and found the avataaars generator which is available as a React component. But I need it to be server-side and in Python. And thankfully there's a great port called py-avataaars.

It depends on CairoSVG to convert an SVG to a PNG but it's easy to install. Anyway, here's my hack to generate random "avataaars" from Django:


import io
import random

import py_avataaars
from django import http
from django.utils.cache import add_never_cache_headers, patch_cache_control


def avatar_image(request, seed=None):
    if not seed:
        seed = request.GET.get("seed") or "random"

    if seed != "random":
        random.seed(seed)

    bytes = io.BytesIO()

    def r(enum_):
        return random.choice(list(enum_))

    avatar = py_avataaars.PyAvataaar(
        style=py_avataaars.AvatarStyle.CIRCLE,
        # style=py_avataaars.AvatarStyle.TRANSPARENT,
        skin_color=r(py_avataaars.SkinColor),
        hair_color=r(py_avataaars.HairColor),
        facial_hair_type=r(py_avataaars.FacialHairType),
        facial_hair_color=r(py_avataaars.FacialHairColor),
        top_type=r(py_avataaars.TopType),
        hat_color=r(py_avataaars.ClotheColor),
        mouth_type=r(py_avataaars.MouthType),
        eye_type=r(py_avataaars.EyesType),
        eyebrow_type=r(py_avataaars.EyebrowType),
        nose_type=r(py_avataaars.NoseType),
        accessories_type=r(py_avataaars.AccessoriesType),
        clothe_type=r(py_avataaars.ClotheType),
        clothe_color=r(py_avataaars.ClotheColor),
        clothe_graphic_type=r(py_avataaars.ClotheGraphicType),
    )
    avatar.render_png_file(bytes)

    response = http.HttpResponse(bytes.getvalue())
    response["content-type"] = "image/png"
    if seed == "random":
        add_never_cache_headers(response)
    else:
        patch_cache_control(response, max_age=60, public=True)

    return response

It's not perfect but it works. The URL to this endpoint is /avatar.<seed>.png and if you use the literal seed random the response is different every time.

To make the image not random, you replace the <seed> with any string. For example (use your imagination):


{% for comment in comments %}
  <img src="/avatar.{{ comment.user.id }}.png" alt="{{ comment.user.name }}">
  <blockquote>{{ comment.text }}</blockquote>
  <i>{{ comment.date }}</i>
{% endfor %}

Progressive CSS rendering with or without data URLs

September 26, 2020
0 comments Web development, Web Performance, JavaScript

You can write your CSS so that it depends on images. Like this:


li.one {
  background-image: url("skull.png");
}

That means that the browser will do its best to style the li.one with what little it has from the CSS. Then, it'll go ahead and download that skull.png URL over the network.

But, another option is to embed the image as a data URL like this:


li.one{background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAIAAAACACAYAAADDPmHL...rkJggg==)

As a block of CSS, it's much larger but it's one less network call. What if you know that skull.png will be needed? Is it faster to inline it or to leave it as a URL? Let's see!

First of all, I wanted to get a feeling for how much larger an image becomes, in bytes, if you transform it into a data URL. Check out this script's output:

▶ ./bin/b64datauri.js src/*.png src/*.svg
src/lizard.png       43,551     58,090     1.3x
src/skull.png        7,870      10,518     1.3x
src/clippy.svg       483        670        1.4x
src/curve.svg        387        542        1.4x
src/dino.svg         909        1,238      1.4x
src/sprite.svg       10,330     13,802     1.3x
src/survey.svg       2,069      2,786      1.3x

Basically, as a blob of data URL, the images become about 1.3x larger. Hopefully, with HTTP2, the headers are cheap for each URL downloaded over the network, but it's not 0. (No idea what the CPU-work multiplier is)
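
The script itself was only linked from the post, but the gist of it is simple; a rough sketch of what it does (assuming Node, with simplified output formatting):

#!/usr/bin/env node
// Hedged sketch: compare each image's raw byte size with the size of
// its base64 data URL equivalent.
const fs = require("fs");
const path = require("path");

const mimeTypes = { ".png": "image/png", ".svg": "image/svg+xml" };

for (const filePath of process.argv.slice(2)) {
  const buf = fs.readFileSync(filePath);
  const mimeType = mimeTypes[path.extname(filePath)] || "application/octet-stream";
  const dataUri = `data:${mimeType};base64,${buf.toString("base64")}`;
  const ratio = (dataUri.length / buf.length).toFixed(1);
  console.log(filePath, buf.length.toLocaleString(), dataUri.length.toLocaleString(), `${ratio}x`);
}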

Experiment assumptions and notes

  • When you first calculate the critical CSS, you know that there's no url(data:mime/type;base64,....) that goes to waste. I.e. you didn't put that in the CSS file or HTML file, bloating it, for nothing.
  • The images aren't too large. Mostly icons and fluff.
  • If it's SVG images, you should probably inline them natively in the HTML so you can control their style.
  • The HTML is compressed for best results.
  • The server is HTTP2

It's a fairly commonly known fact that data URLs have a CPU cost. That base64 needs to be decoded before the image can be decoded by the renderer. So let's stick to fairly small images.

The experiment

I made a page that looks like this:


li {
  background-repeat: no-repeat;
  width: 150px;
  height: 150px;
  margin: 20px;
  background-size: contain;
}
li.one {
  background-image: url("skull.png");
}
li.two {
  background-image: url("dino.svg");
}
li.three {
  background-image: url("clippy.svg");
}
li.four {
  background-image: url("sprite.svg");
}
li.five {
  background-image: url("survey.svg");
}
li.six {
  background-image: url("curve.svg");
}

and


<ol>
  <li class="one">One</li>
  <li class="two">Two</li>
  <li class="three">Three</li>
  <li class="four">Four</li>
  <li class="five">Five</li>
  <li class="six">Six</li>
</ol>

See the whole page here

The page also uses Bootstrap to make it somewhat realistic. Then, using minimalcss, I combine the external CSS into inline CSS and produce a page that is just HTML + one <style> tag.

Now, based on that page, the variant is that each url($URL) in the CSS gets converted to url(data:mime/type;base64,blablabla...). The HTML is gzipped (and brotli compressed) and put behind a CDN. The URLs are:

Also, there's this page which is without the critical CSS inlined.

To appreciate what this means in terms of size on the HTML, let's compare:

  • inlined.html with external URLs: 2,801 bytes (1,282 gzipped)
  • inlined-datauris.html with data URLs: 32,289 bytes (17,177 gzipped)

Considering that gzip (accept-encoding: gzip,deflate) is almost always used by browsers, that means the page is 15KB more before it can be fully downloaded. (But, it's streamed so maybe the comparison is a bit flawed)

Analysis

WebPagetest.org results here. I love WebPagetest, but the results are usually a bit too erratic to be good enough for comparing. Maybe if you could do the visual comparison repeated times, but I don't think you can.

WebPagetest visual comparison

And the waterfalls...

WebPagetest waterfall, with regular URLs

WebPagetest waterfall, with data URLs

Fairly expected.

  • With external image URLs, the browser will start to display the CSSOM before the images have downloaded. Meaning, the CSS is render-blocking, but the external images are not.

  • The final result comes in sooner with data URLs.

  • With data URLs you have to stare at a white screen longer.

Next up, using Google Chrome's Performance dev tools panel. Set to 6x CPU slowdown and a Fast 3G network.

I don't know how to demonstrate this other than screenshots:

Performance with external images

Performance with data URLs

Those screenshots are rough attempts at showing the area where the images start to display.

Whole Performance tab with external images

Whole Performance tab with data URLs

I ran these things 2 times and the results were pretty steady.

  • With external images, fully loaded at about 2.5 seconds
  • With data URLs, fully loaded at 1.9 seconds

I tried Lighthouse but the difference was indistinguishable.

Summary

Yes, inlining your CSS images is faster. But it's with a slim margin and the disadvantages aren't negligible.

This technique costs more CPU because there's a lot more base64 decoding to be done, and what if you have a big fat JavaScript bundle in there that wants a piece of the CPU? So ask yourself, how valuable is it to not hog the CPU? Perhaps someone who understands the browser engines better can tell whether the base64 decoding cost is spread nicely onto multiple CPUs or whether it would stand in the way of the main thread.

What about anti-progressive rendering

When Facebook redesigned www.facebook.com in mid-2020 one of their conscious decisions was to inline the SVG glyphs into the JavaScript itself.

"To prevent flickering as icons come in after the rest of the content, we inline SVGs into the HTML using React rather than passing SVG files to <img> tags."

Although that comment was about SVGs in the DOM, from a JavaScript perspective, the point is nevertheless relevant to my experiment. If you look closely at the screenshots above (or if you open the URL yourself and hit reload with HTTP caching disabled) the net effect is that the late-loading images do cause a bit of "flicker". It's not flickering as in "now it's here", "now it's gone", "now it's back again". But it's flickering in that things are happening with progressive rendering. Your eyes might get tired and say to your brain "Wake me up when the whole thing is finished. I can wait."

This topic quickly escalates into perceived performance which is a stratosphere of its own. And personally, I can only estimate and try to speak about my gut reactions.

In conclusion, there are advantages to using data URIs over external images in CSS. But please, first make sure you don't convert the image URLs in a big bloated .css file to data URLs if you're not sure they'll all be needed in the DOM.

Bonus!

If you're not convinced of the power of inlining the critical CSS, check out this WebPagetest run that includes a page that references the whole bootstrap.min.css, before doing any other optimizations.

With baseline that isn't just the critical CSS

Quick comparison between sass and node-sass

September 10, 2020
2 comments Node, JavaScript

To transpile .scss (or .sass) in Node you have the choice between sass and node-sass. sass is a JavaScript compilation of Dart Sass, which is supposedly "the primary implementation of Sass", which is a pretty powerful statement. node-sass, on the other hand, is a wrapper on LibSass, which is written in C++. Let's break it down a little bit more.

Speed

node-sass is faster. About 7 times faster. I took all the SCSS files behind the current MDN Web Docs, which is a fairly large codebase. Transformed into CSS it becomes a ~180KB blob of CSS (92KB when optimized with csso).

Here's my ugly benchmark test which I run about 10 times like this:

node-sass took 101ms result 180kb 92kb
node-sass took 99ms result 180kb 92kb
node-sass took 99ms result 180kb 92kb
node-sass took 100ms result 180kb 92kb
node-sass took 100ms result 180kb 92kb
node-sass took 103ms result 180kb 92kb
node-sass took 102ms result 180kb 92kb
node-sass took 113ms result 180kb 92kb
node-sass took 100ms result 180kb 92kb
node-sass took 101ms result 180kb 92kb

And here's the same thing for sass:

sass took 751ms result 173kb 92kb
sass took 728ms result 173kb 92kb
sass took 728ms result 173kb 92kb
sass took 798ms result 173kb 92kb
sass took 854ms result 173kb 92kb
sass took 726ms result 173kb 92kb
sass took 727ms result 173kb 92kb
sass took 782ms result 173kb 92kb
sass took 834ms result 173kb 92kb

In another example, I ran sass and node-sass on ./node_modules/bootstrap/scss/bootstrap.scss (version 5.0.0-alpha1) and these are the results after 5 runs:

node-sass took 269ms result 176kb 139kb
node-sass took 260ms result 176kb 139kb
node-sass took 288ms result 176kb 139kb
node-sass took 261ms result 176kb 139kb
node-sass took 260ms result 176kb 139kb

versus

sass took 1423ms result 176kb 139kb
sass took 1350ms result 176kb 139kb
sass took 1338ms result 176kb 139kb
sass took 1368ms result 176kb 139kb
sass took 1467ms result 176kb 139kb
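
The benchmark script was only linked from the post; its shape would be roughly this (a hedged sketch using each library's renderSync() and csso for the minified number):

// Hypothetical benchmark sketch; swap require("sass") for require("node-sass").
const sass = require("sass");
const csso = require("csso");

const t0 = Date.now();
const result = sass.renderSync({ file: process.argv[2] });
const took = Date.now() - t0;

const css = result.css.toString();
const minified = csso.minify(css).css;
const kb = (s) => `${Math.round(s.length / 1024)}kb`;
console.log(`sass took ${took}ms result ${kb(css)} ${kb(minified)}`);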

Output

The unminified CSS differs primarily in the indentation. But if you minify both outputs and then pretty-print them (with prettier) you get the following difference:


▶ diff /tmp/sass.min.css.pretty /tmp/node-sass.min.css.pretty
152c152
<   letter-spacing: -0.0027777778rem;
---
>   letter-spacing: -0.00278rem;
231c231
<   content: "▼︎";
---
>   content: "\25BC\FE0E";

...snip...


2804c2812
< .external-icon:not([href^="https://mdn.mozillademos.org"]):not(.ignore-external) {
---
> .external-icon:not([href^='https://mdn.mozillademos.org']):not(.ignore-external) {

Basically, sass will produce things like letter-spacing: -0.0027777778rem; and content: "▼︎";. And node-sass will produce letter-spacing: -0.00278rem; and content: "\25BC\FE0E";.
I also noticed some minor differences just in the order of some selectors but when I looked more carefully, they're immaterial order differences, meaning they're not cascading each other in any way.

Note! I don't know why the use of ' and " is different or if it matters. I don't know why prettier (version 2.1.1) didn't pick one over the other consistently.

node_modules

Here's how I created the two projects to compare:


cd /tmp
mkdir just-sass && cd just-sass && yarn init -y && time yarn add sass && cd ..
mkdir just-node-sass && cd just-node-sass && yarn init -y && time yarn add node-sass && cd ..

Considering that sass is just a JavaScript compilation of a Dart program, all you get is basically a 3.6MB node_modules/sass/sass.dart.js file.

The /tmp/just-sass/node_modules directory is only 113 files and folders weighing a total of 4.1MB.
Whereas /tmp/just-node-sass/node_modules directory is 3,658 files and folders weighing a total of 15.2MB.

I don't know about you but I'm very skeptical that node-gyp ever works. Who even has Python 2.7 installed anymore? Being able to avoid node-gyp seems like a win for sass.

Conclusion

The speed difference may or may not matter. If you're only doing it once, who cares about a couple of hundred milliseconds. But if you're forced to have to wait 1.4 seconds on every Ctrl-S when Webpack or whatever tooling you have starts up sass it might become very painful.

I don't know much about the sass-loader Webpack plugin but it apparently works with either, though they do recommend sass in their documentation. And it's the default implementation too.
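
If you want to be explicit about which implementation sass-loader uses, it takes an implementation option; something like this sketch (check the sass-loader docs for your version):

// webpack.config.js (sketch)
module.exports = {
  module: {
    rules: [
      {
        test: /\.s[ac]ss$/,
        use: [
          "style-loader",
          "css-loader",
          { loader: "sass-loader", options: { implementation: require("sass") } },
        ],
      },
    ],
  },
};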

It's definitely a feather in sass's hat that Dart Sass is the "primary implementation" of Sass. That just has a nice feeling in sass's favor.

Bonus

NPMCompare has a nice comparison of them as projects but you have to study each row of numbers because it's rarely as simple as a bigger (or smaller) number being better. For example, the number of open issues isn't a measure of bugs.

The new module system launched in October 2019 is supposedly only coming to Dart Sass, which means sass is definitely going to get it first. If that stuff matters to you. For example, true, the Sass unit-testing tool, now requires Dart Sass and has dropped support for node-sass.

Lazy-load Firebase Firestore and Firebase Authentication in Preact

September 2, 2020
0 comments Web development, Web Performance, JavaScript, Preact

I'm working on a Firebase app called That's Groce! based on preact-cli, with TypeScript, and I wanted to see how it behaves with or without Firestore and Authentication lazy-loaded.

In the root, there's an app.tsx that used to look like this:


import { FunctionalComponent, h } from "preact";
import { useState, useEffect } from "preact/hooks";

import firebase from "firebase/app";
import "firebase/auth";
import "firebase/firestore";

import { firebaseConfig } from "./firebaseconfig";

const app = firebase.initializeApp(firebaseConfig);

const App: FunctionalComponent = () => {
  const [auth, setAuth] = useState<firebase.auth.Auth | null>(null);
  const [db, setDB] = useState<firebase.firestore.Firestore | null>(null);

  useEffect(() => {
    const appAuth = app.auth();
    setAuth(appAuth);
    appAuth.onAuthStateChanged(authStateChanged);

    const db = firebase.firestore();
    setDB(db);
  }, []);

...

While this works, it does make a really large bundle when both firebase/firestore and firebase/auth are imported in the main bundle. In fact, it looks like this:

▶ ls -lh build/*.esm.js
-rw-r--r--  1 peterbe  staff   510K Sep  1 14:13 build/bundle.0438b.esm.js
-rw-r--r--  1 peterbe  staff   5.0K Sep  1 14:13 build/polyfills.532e0.esm.js

510K is pretty hefty to have to ask the client to download immediately. It's loaded like this (in build/index.html):


<script crossorigin="anonymous" src="/bundle.0438b.esm.js" type="module"></script>
<script nomodule src="/polyfills.694cb.js"></script>
<script nomodule defer="defer" src="/bundle.a4a8b.js"></script>

To lazy-load this

To lazy-load the firebase/firestore and firebase/auth you do this instead:


...

const App: FunctionalComponent = () => {
  const [auth, setAuth] = useState<firebase.auth.Auth | null>(null);
  const [db, setDB] = useState<firebase.firestore.Firestore | null>(null);

  useEffect(() => {
    import("firebase/auth")
      .then(() => {
        const appAuth = app.auth();
        setAuth(appAuth);
        appAuth.onAuthStateChanged(authStateChanged);
      })
      .catch((error) => {
        console.error("Unable to lazy-load firebase/auth:", error);
      });

    import("firebase/firestore")
      .then(() => {
        const db = firebase.firestore();
        setDB(db);
      })
      .catch((error) => {
        console.error("Unable to lazy-load firebase/firestore:", error);
      });
  }, []);

...

Now it looks like this instead:

▶ ls -lh build/*.esm.js
-rw-r--r--  1 peterbe  staff   173K Sep  1 14:24 build/11.chunk.b8684.esm.js
-rw-r--r--  1 peterbe  staff   282K Sep  1 14:24 build/12.chunk.3c1c4.esm.js
-rw-r--r--  1 peterbe  staff    56K Sep  1 14:24 build/bundle.7225c.esm.js
-rw-r--r--  1 peterbe  staff   5.0K Sep  1 14:24 build/polyfills.532e0.esm.js

The total sum of all (relevant) .esm.js files is the same (minus a difference of 430 bytes).

But what does it really look like? The app is already based around that


const [db, setDB] = useState<firebase.firestore.Firestore | null>(null);

so it knows to wait until db is truthy and it displays a <Loading/> component until it's ready.
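
In JSX terms, that guard is something like this (a sketch; <Loading/> is the app's own component):

// Sketch: render a spinner until the lazy-loaded Firestore is ready.
if (!db) {
  return <Loading />;
}
// ...otherwise carry on rendering the real UI with `db`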

To test how it loads I used the Chrome Performance devtools with or without the lazy-loading and it's fairly self-explanatory:

Before, no lazy-loading

After, with lazy-loading

Clearly, the lazy-loaded version has a nicer pattern in that it breaks up the work done by the main thread.

Conclusion

It's fairly simple to do and it works. The main bundle becomes lighter and allows the browser to start rendering the Preact component sooner. But it's not entirely obvious that it's that much better. The same amount of JavaScript needs to be downloaded and parsed no matter what. It's clearly working as a pattern but it's still pretty hard to judge if it's worth it. Now there's more "swapping".

And the whole page is server-side rendered anyway so in terms of immediate first-render it's probably the same. Hopefully, HTTP2 loading does the right thing but it's not yet entirely clear if the complete benefit is there. I certainly hope that this can improve the "Total Blocking Time" and "Time to Interactive".

The other important thing is that not all imports from firebase/* work in Node because they depend on window. It works for firebase/firestore and firebase/auth but not for firebase/analytics and firebase/performance. Now, I can add those lazy-loaded in the client and still have the page rendered in Node for that initial build/index.html.
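
A sketch of what that could look like, using the same useEffect pattern as above (the typeof window guard is the point, since Node has no window during the pre-render):

useEffect(() => {
  if (typeof window === "undefined") return; // skip during the Node pre-render
  import("firebase/analytics")
    .then(() => {
      firebase.analytics();
    })
    .catch((error) => {
      console.error("Unable to lazy-load firebase/analytics:", error);
    });
}, []);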

Test if two URLs are "equal" in JavaScript

July 2, 2020
3 comments JavaScript

This saved my bacon today and I quite like it so I hope that others might benefit from this little tip.

So you have two "URLs" and you want to know if they are "equal". I write those words, in the last sentence, in quotation marks because they might not be fully formed URLs and what you consider equal might depend on the current business logic.

In my case, I wanted http://www.peterbe.com/path/to?a=b to be considered equal to /path/to#anchor. Because, in this case, they both share the exact same pathname (/path/to). So how to do it:


function equalUrls(url1, url2) {
  return (
    new URL(url1, "http://example.com").pathname ===
    new URL(url2, "http://example.com").pathname
  );
}
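
Usage, with the examples from above (the second comparison is made up):

equalUrls("http://www.peterbe.com/path/to?a=b", "/path/to#anchor");
// true
equalUrls("/path/to", "/path/two");
// false

The second argument to new URL() is a base URL, which is what makes relative inputs like /path/to#anchor parseable at all; the http://example.com base never takes part in the comparison.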


findMatchesInText - Find line and column of matches in a text, in JavaScript

June 22, 2020
0 comments Node, JavaScript

I need this function to use together with open-editor, which is a Node program that can open your $EDITOR from Node and jump to a specific file, a specific line, and a specific column.

Here's the code:


function* findMatchesInText(needle, haystack, { inQuotes = false } = {}) {
  const escaped = needle.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  let rex;
  if (inQuotes) {
    rex = new RegExp(`['"](${escaped})['"]`, "g");
  } else {
    rex = new RegExp(`(${escaped})`, "g");
  }
  for (const match of haystack.matchAll(rex)) {
    const left = haystack.slice(0, match.index);
    const line = (left.match(/\n/g) || []).length + 1;
    const lastIndexOf = left.lastIndexOf("\n") + 1;
    const column = match.index - lastIndexOf + 1;
    yield { line, column };
  }
}

And you use it like this:


const text = ` bravo
Abra
cadabra

bravo
`;

console.log(Array.from(findMatchesInText("bra", text)));

Which prints:


[
  { line: 1, column: 2 },
  { line: 2, column: 2 },
  { line: 3, column: 5 },
  { line: 5, column: 1 }
]

The inQuotes option is because a lot of times this function is going to be used for finding the href value in unstructured documents that contain HTML <a> tags.
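
For example, to find an href value only where it's actually quoted (a hypothetical snippet):

const html = `<a href="/docs">Docs</a> or /docs in plain text`;
console.log(Array.from(findMatchesInText("/docs", html, { inQuotes: true })));
// [ { line: 1, column: 9 } ]

Note that the reported column points at the opening quote, since the quote is part of the match.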

hashin 0.15.0 now copes nicely with under_scores

June 15, 2020
0 comments Python

tl;dr hashin 0.15.0 makes package comparison agnostic to underscores or hyphens

See issue #116 for a fuller story. Basically, now it doesn't matter if you write...

hashin python_memcached

...or...

hashin python-memcached

And the same can be said about the contents of your requirements.txt file. Suppose it already had something like this:

python_memcached==1.59 \
    --hash=sha256:4dac64916871bd35502 \
    --hash=sha256:a2e28637be13ee0bf1a8

and you type hashin python-memcached, it will do the version comparison on these independent of underscores or hyphens.

Thanks to @caphrim007 who implemented this for the benefit of Renovate.

./bin/huey-isnt-running.sh - A bash script to prevent lurking ghosts

June 10, 2020
0 comments Python, Linux, Bash

tl;dr; Here's a useful bash script to avoid starting something when it's already running as a ghost process.

Huey is a great little Python library for doing background tasks. It's like Celery but much lighter, faster, and easier to understand.

What cost me almost an hour of hair-tearing debugging today was that I didn't realize that a huey daemon process had gotten stuck in the background with code that wasn't updating as I made changes to the tasks.py file in my project. I just couldn't understand what was going on.

The way I start my project is with honcho which is a Python Foreman clone. The Procfile looks something like this:


elasticsearch: cd /Users/peterbe/dev/PETERBECOM/elasticsearch-7.7.0 && ./bin/elasticsearch -q
web: ./bin/run.sh web
minimalcss: cd minimalcss && PORT=5000 yarn run start
huey: ./manage.py run_huey --flush-locks --huey-verbose
adminui: cd adminui && yarn start
pulse: cd pulse && yarn run dev

And you start that by simply typing:


honcho start

When you Ctrl-C, it kills all those processes but somehow somewhere it doesn't always kill everything. Restarting the computer isn't a fun alternative.

So, to prevent my sanity from draining I wrote this script:


#!/usr/bin/env bash
set -eo pipefail

# This is used to make sure that before you start huey,
# there isn't already one running in the background.
# It has happened that huey gets stuck lingering as a
# ghost and it's hard to notice it sitting there
# lurking and being weird.

bad() {
    echo "Huey is already running!"
    exit 1
}

good() {
    echo "Huey is NOT already running"
    exit 0
}

ps aux | rg huey | rg -v 'rg huey' | rg -v 'huey-isnt-running.sh' && bad || good

(If you're wondering what rg is; it's short for ripgrep)

And I change my Procfile accordingly:


-huey: ./manage.py run_huey --flush-locks --huey-verbose
+huey: ./bin/huey-isnt-running.sh && ./manage.py run_huey --flush-locks --huey-verbose

There really isn't much rocket science or brain surgery about this blog post but I hope it inspires someone who's been in similar trenches that a simple bash script can make all the difference.

Check your email addresses in Python, as a whole

May 22, 2020
0 comments Python, MDN

So recently, in MDN, we changed the setting WELCOME_EMAIL_FROM. Seems harmless, right? Wrong, it failed horribly at runtime and we didn't notice until it was in production. Here's the traceback:

SMTPSenderRefused: (552, b"5.1.7 The sender's address was syntactically invalid.\n5.1.7 see : http://support.socketlabs.com/kb/84 for more information.", '=?utf-8?q?Janet?=')
(8 additional frame(s) were not displayed)
...
  File "newrelic/api/function_trace.py", line 151, in literal_wrapper
    return wrapped(*args, **kwargs)
  File "django/core/mail/message.py", line 291, in send
    return self.get_connection(fail_silently).send_messages([self])
  File "django/core/mail/backends/smtp.py", line 110, in send_messages
    sent = self._send(message)
  File "django/core/mail/backends/smtp.py", line 126, in _send
    self.connection.sendmail(from_email, recipients, message.as_bytes(linesep='\r\n'))
  File "python3.8/smtplib.py", line 871, in sendmail
    raise SMTPSenderRefused(code, resp, from_addr)


Yikes!

So, to prevent this from happening ever again, we're putting in this check:


from email.utils import parseaddr

WELCOME_EMAIL_FROM = config("WELCOME_EMAIL_FROM", ...)

# If this fails, SMTP will probably also fail.
assert parseaddr(WELCOME_EMAIL_FROM)[1].count('@') == 1, parseaddr(WELCOME_EMAIL_FROM)

You could go to town even more on this. Perhaps use the email validator within Django, but for now I'd call that overkill. This is just a decent check before anything gets a chance to go wrong.