10 years a Mozillian, always a Mozillian

August 30, 2021
2 comments Web development, Mozilla, MDN

As of September 2021, I am leaving Mozilla after 10 years. It hasn't been perfect but it's been a wonderful time with fond memories and an amazing career rocket ship.

In April 2011, I joined as a web developer to work on internal web applications that support Firefox development engineering. In rough order, I worked on...

  • Elmo: The web application for managing the state of Firefox localization
  • Socorro: When Firefox crashes and asks to send a crash dump, this is the storage plus website for analyzing that
  • Peekaboo: When people come to visit a Mozilla office, they sign in on a tablet at the reception desk
  • Balrog: For managing what versions are available for Firefox products to query when it's time to self-upgrade
  • Air Mozilla: For watching live streams and video archive of all recordings within the company
  • MozTrap: Where QA engineers track what was tested and the results of QA testing Firefox products
  • Symbol Server: Where all C++ debug symbols are stored from the build pipeline to be used to source-map crash stack traces
  • Buildhub: To get a complete database of every individual build of Firefox products that was shipped
  • Remote Settings: Managing experiments and for Firefox to "phone home" for smaller updates/experiments between releases
  • MDN Web Docs: Where web developers go to look up all the latest and most detailed details about web APIs

This is an incomplete list because at Mozilla you get to help each other and I shipped a lot of smaller projects too, such as Contribute.json, Whatsdeployed, GitHub PR Triage, Bugzilla GitHub Bug Linker.

Reflecting back, the highlight of any project is when you get to meet or interact with the people you help. Few things are as rewarding as when someone you don't know finds out, in person, what you do and says: "Are you Peter?! The one who built XYZ? I love that stuff! We use it all the time now in my team. Thank you!" It's not a brag because oftentimes what you build for fellow humans isn't brilliant engineering in any way. It's just something that someone needed. Perhaps the lesson learned is the importance of not celebrating what you've built but simply putting yourself in the same room as the people who use it. And, in fact, if what you've built for someone else isn't particularly loved, meeting and fully interacting with the people who use "your stuff" gives you the best feedback, and who doesn't love constructive criticism that empowers you to build better stuff?

Mozilla is a great company. There is no doubt in my mind. We ship high-quality products and we do it with pride. There have definitely been some rough patches over the years but that happens and you just have to carry on and try to focus on delivering value. Firefox Nightly will continue to be my default browser and I'll happily click any Google search ads to help every now and then. THANK YOU everyone I've ever worked with at Mozilla! You are a wonderful bunch of people!

How to get all of MDN Web Docs running locally

June 9, 2021
1 comment Web development, MDN

tl;dr; git clone https://github.com/mdn/content.git && cd content && yarn install && yarn start && open http://localhost:5000/ will get you all of MDN Web Docs running on your laptop.

MDN Web Docs is built from a git repository: github.com/mdn/content. It contains all you need to get all the content running locally. Including search. Embedded inside that repository is a package.json which helps you start a Yari server, a.k.a. the preview server. It's a static build of the github.com/mdn/yari project which handles client-side rendering and search, plus a just-in-time server-side rendering server.

Basics

All you need is the following:

▶ git clone https://github.com/mdn/content.git
▶ cd content
▶ yarn install
▶ yarn start

And now open http://localhost:5000 in your browser.

This will now run in "preview server" mode. It's meant for contributors (and core writers) to use when they're working on a git branch. Because of that, you'll see a "Writer's homepage" at the root URL. And when viewing each document, you get buttons about "flaws" and stuff. Looks like this:

Preview server

Alternative ways to download

If you don't want to use git clone you can download the ZIP file. For example:

▶ wget https://github.com/mdn/content/archive/refs/heads/main.zip
▶ unzip main.zip
▶ cd content-main
▶ yarn install
▶ yarn start

At the time of writing, the downloaded Zip file is 86MB and unzipped the directory is 278MB on disk.

When you use git clone, by default it will download all the git history. That can actually be useful. This way, when rendering each document, it can figure out from the git logs when each individual document was last modified. For example:

"Last modified"

If you don't care about the "Last modified" date, you can do a "shallow git clone" instead. Replace the above-mentioned first command with:

▶ git clone --depth 1 https://github.com/mdn/content.git

At the time of writing the shallow cloned content folder becomes 234MB instead of (the deep clone) 302MB.

Just the raw rendered data

Every MDN Web Docs page has an index.json equivalent. Take any MDN page and add /index.json to the URL. For example /en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/slice/index.json

Essentially, this is the intermediate state that's used for server-side rendering the page. A glorified way of sandwiching the content in a header, a footer, and a sidebar to the side. These URLs work on localhost:5000 too. Try http://localhost:5000/en-US/docs/Web/API/Fetch_API/Using_Fetch/index.json for example.
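
For example, here's a tiny Python sketch (my own, not part of Yari) that fetches one of those index.json files from the local preview server and pokes at it:

import json
from urllib.request import urlopen

# Assumes the preview server from `yarn start` is running on port 5000.
url = "http://localhost:5000/en-US/docs/Web/API/Fetch_API/Using_Fetch/index.json"
with urlopen(url) as response:
    data = json.load(response)

print(data["doc"]["title"])
print(sorted(data["doc"].keys()))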

The content for that index.json is built just in time. It also contains a bunch of extra metadata about "flaws": a system used to highlight things that should be fixed and that are somewhat easy to detect automatically. So, it doesn't catch things like spelling mistakes or code snippets that are actually invalid.

But suppose you want all that raw (rendered) data, without any of the flaw detections, you can run this command:

▶ BUILD_FLAW_LEVELS="*:ignore" yarn build

It'll take a while (because it produces an index.html file too). But now you have all the index.json files for everything in the newly created ./build/ directory. It should have created a lot of files:

▶ find build -name index.json | wc -l
   11649

If you just want a subtree of files you could have run it like this instead:

▶ BUILD_FOLDERSEARCH=web/javascript BUILD_FLAW_LEVELS="*:ignore" yarn build

Programmatic API access

The programmatic APIs are all about finding the source files. But you can use the sources to turn that into the built files you might need. Or just to get a list of URLs. To get started, create a file called find-files.js in the root:


const { Document } = require("@mdn/yari/content");

console.log(Document.findAll().count);

Now, run it like this:

▶ export CONTENT_ROOT=files

▶ node find-files.js
11649

Other things you can do with that findAll function:


const { Document } = require("@mdn/yari/content");

const found = Document.findAll({
  folderSearch: "web/javascript/reference/statements/f",
});
for (const document of found.iter()) {
  console.log(document.url);
}

Or, suppose you want to actually build each of these that you find:


const { Document } = require("@mdn/yari/content");
const { buildDocument } = require("@mdn/yari/build");

const found = Document.findAll({
  folderSearch: "web/javascript/reference/statements/f",
});

Promise.all([...found.iter()].map((document) => buildDocument(document))).then(
  (built) => {
    for (const { doc } of built) {
      console.log(doc.title.padEnd(20), doc.popularity);
    }
  }
);

That'll output something like this:

▶ node find-files.js
for                  0.0143
for await...of       0.0129
for...in             0.0748
for...of             0.0531
function declaration 0.0088
function*            0.0122

All the HTML content in production-grade mode

In its most basic form, yarn start will start the "preview server", which is tailored towards building just in time and has all those buttons at the top for writers/contributors. If you want the more "production-grade" version, you can't use the copy of @mdn/yari that is "included" in the mdn/content repo. To do this, you need to git clone mdn/yari and install that. Hang on, this is about to get a bit more advanced:

▶ git clone https://github.com/mdn/yari.git
▶ cd yari
▶ yarn install
▶ yarn build:client
▶ yarn build:ssr
▶ CONTENT_ROOT=../files REACT_APP_DISABLE_AUTH=true BUILD_FLAW_LEVELS="*:ignore" yarn build
▶ CONTENT_ROOT=../files node server/static.js

Now, if you go to something like http://localhost:5000/en-US/docs/Web/Guide/ you'll get the same thing as you get on https://developer.mozilla.org but all on your laptop. Should be pretty snappy.

Is it really entirely offline?

No, it leaks a little. For example, there are interactive examples that use an iframe that's hardcoded to https://interactive-examples.mdn.mozilla.net/.

There are also external images. For example, you might get a live sample that refers to sample images on https://mdn.mozillademos.org/files/.... So that'll fail if you're without WiFi in a spaceship.

Conclusion

Making all of MDN Web Docs available offline is, honestly, not a priority. The focus is on A) a secure production build, and B) a good environment for previewing content changes. But all the pieces are there. Search is a little bit tricky, as an example. When you're running it as a preview server you can't do a full-text search on all the content, but you get a useful autocomplete search widget for navigating between different titles. And the full-text search engine is a remote centralized server that you can't take with you offline.

But all the pieces are there. Somehow. It all depends on your use case and what you're willing to "compromise" on.

The correct way to index data into Elasticsearch with (Python) elasticsearch-dsl

May 14, 2021
0 comments Python, MDN, Elasticsearch

This is how MDN Web Docs uses Elasticsearch. Daily, we build all the content and then upload it all with elasticsearch-dsl, using aliases. Because there are no good, complete guides on how to do this, I thought I'd write it down for the next person who needs to do something similar. Let's jump straight into the code. The reader will need a healthy dose of imagination to fill in the details.

Indexing


# models.py

from datetime import datetime

from elasticsearch_dsl import Document, Text

PREFIX = "myprefix"


class MyDocument(Document):
    title = Text()
    body = Text()
    # ...

    class Index:
        name = (
            f'{PREFIX}_{datetime.utcnow().strftime("%Y%m%d%H%M%S")}'
        )

What's important to note here is that the MyDocument.Index.name is dynamically allocated every single time the module is imported. It's not very important exactly what it is called but it's important that it becomes unique each time.
This means that when you start using MyDocument it will automatically figure out which index to use. Now, it's time to create the index and bulk publish it.


# index.py
# Note! This example code skips over things like progress bars
# and verbose logging and misc sanity checks and stuff.

import json
from pathlib import Path

from elasticsearch.helpers import parallel_bulk
from elasticsearch_dsl import Index
from elasticsearch_dsl.connections import connections

from .models import MyDocument, PREFIX


class IndexAliasError(Exception):
    """Raised when no existing index matching the prefix can be found."""

def index(buildroot: Path, url: str, update=False):
    """
    * 'buildroot' is where the files are we're going to read and index
    * 'url' is the host URL for the Elasticsearch server
    * 'update' is if you just want to "cake on" a couple of documents
      instead of starting over and doing a complete indexing.
    """

    # Connect and stuff
    connections.create_connection(hosts=[url], retry_on_timeout=True)
    connection = connections.get_connection()
    health = connection.cluster.health()
    status = health["status"]
    if status not in ("green", "yellow"):
        raise Exception(f"status {status} not green or yellow")

    if update:
        for name in connection.indices.get_alias():
            if name.startswith(f"{PREFIX}_"):
                document_index = Index(name)
                break
        else:
            raise IndexAliasError(
                f"Unable to find an index called {PREFIX}_*"
            )

    else:
        # Confusingly, `._index` is actually not a private API.
        # It's the documented way you're supposed to reach it.
        document_index = MyDocument._index
        document_index.create()

    def generator():
        # Assumes the build produced one .json file per document somewhere
        # under 'buildroot'; adjust the glob to your on-disk layout.
        for doc in Path(buildroot).rglob("*.json"):
            # The reason for specifying the exact index name is that we might
            # be doing an update and if you don't specify it, elasticsearch_dsl
            # will fall back to using whatever Document._meta.Index automatically
            # becomes in this moment.
            yield to_search(doc, _index=document_index._name).to_dict(True)

    for success, info in parallel_bulk(connection, generator()):
        # 'success' is a boolean
        # 'info' has stuff like:
        #  - info["index"]["error"]
        #  - info["index"]["_shards"]["successful"]
        #  - info["index"]["_shards"]["failed"]
        pass

    if update:
        # When you do an update, Elasticsearch will internally delete the
        # previous docs (based on the _id primary key we set).
        # Normally, Elasticsearch will do this when you restart the cluster
        # but that's not something we usually do.
        # See https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-forcemerge.html
        document_index.forcemerge()
    else:
        # Now we're going to bundle the change to set the alias to point
        # to the new index and delete all old indexes.
        # The reason for doing this together in one update is to make it atomic.
        alias_updates = [
            {"add": {"index": document_index._name, "alias": PREFIX}}
        ]
        for index_name in connection.indices.get_alias():
            if index_name.startswith(f"{PREFIX}_"):
                if index_name != document_index._name:
                    alias_updates.append({"remove_index": {"index": index_name}})
        connection.indices.update_aliases({"actions": alias_updates})

    print("All done!")



def to_search(file: Path, _index=None):
    with open(file) as f:
        data = json.load(f)
    return MyDocument(
        _index=_index,
        _id=data["identifier"],
        title=data["title"],
        body=data["body"]
    )

A lot is left to the reader as an exercise to fill in but these are the most important operations. It demonstrates how you can

  1. Correctly create indexes
  2. Atomically create an alias and clean up old indexes (and aliases)
  3. Add to an existing index

After you've run this you'll see something like this:

$ curl http://localhost:9200/_cat/indices?v
...
health status index                   uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   myprefix_20210514141421 vulVt5EKRW2MNV47j403Mw   1   1      11629            0     28.7mb         28.7mb

$ curl http://localhost:9200/_cat/aliases?v
...
alias    index                   filter routing.index routing.search is_write_index
myprefix myprefix_20210514141421 -      -             -              -

Searching

When it comes to using the index, well, it depends on where your code for that is. For example, on MDN Web Docs, the code that searches the index is in an entirely different code-base. It's incidentally Python (and elasticsearch-dsl) in both places but other than that they have nothing in common. So for the searching, you need to manually make sure you write down the name of the index (or name of the alias if you prefer) into the code that searches. For example:


from django.conf import settings  # or wherever your configuration lives
from elasticsearch_dsl import Search


def search(params):
    search_query = Search(index=settings.SEARCH_INDEX_NAME)

    # Do stuff to 'search_query' based on 'params'

    response = search_query.execute()
    for hit in response:
        ...  # do something with each hit

If you're within the same code that has that models.MyDocument in the first example code above, you can simply do things like this:


from elasticsearch_dsl import Index
from elasticsearch_dsl.connections import connections

from .models import PREFIX


def analyze(
    url: str,
    text: str,
    analyzer: str,
):
    connections.create_connection(hosts=[url])
    index = Index(PREFIX)
    analysis = index.analyze(body={"text": text, "analyzer": analyzer})
    # ...

What English stop words overlap with JavaScript reserved keywords?

May 7, 2021
2 comments JavaScript, MDN

The list of stop words in Elasticsearch is:

a, an, and, are, as, at, be, but, by, for, if, in, into, 
is, it, no, not, of, on, or, such, that, the, their, 
then, there, these, they, this, to, was, will, with

The list of JavaScript reserved keywords is:

abstract, arguments, await, boolean, break, byte, case, 
catch, char, class, const, continue, debugger, default, 
delete, do, double, else, enum, eval, export, extends, 
false, final, finally, float, for, function, goto, if, 
implements, import, in, instanceof, int, interface, let, 
long, native, new, null, package, private, protected, 
public, return, short, static, super, switch, synchronized, 
this, throw, throws, transient, true, try, typeof, var, 
void, volatile, while, with, yield

That means that the overlap is:

for, if, in, this, with

And the remainder of the English stop words is:

a, an, and, are, as, at, be, but, by, into, is, it, no, 
not, of, on, or, such, that, the, their, then, there, 
these, they, to, was, will

Why does this matter? It matters when you're writing a search engine on English text that is about JavaScript, such as MDN Web Docs. At the time of writing, you can search for "this" because there's a special case explicitly for that word. But you can't search for "for", which is unfortunate.
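
If you want to double-check the arithmetic, here's a quick Python sketch of the set operations above:

ENGLISH_STOP_WORDS = set(
    "a an and are as at be but by for if in into is it no not of on or "
    "such that the their then there these they this to was will with".split()
)
JS_RESERVED_KEYWORDS = set(
    "abstract arguments await boolean break byte case catch char class const "
    "continue debugger default delete do double else enum eval export extends "
    "false final finally float for function goto if implements import in "
    "instanceof int interface let long native new null package private "
    "protected public return short static super switch synchronized this "
    "throw throws transient true try typeof var void volatile while with yield".split()
)

print(sorted(ENGLISH_STOP_WORDS & JS_RESERVED_KEYWORDS))  # the overlap
print(sorted(ENGLISH_STOP_WORDS - JS_RESERVED_KEYWORDS))  # the remainder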

But there's more! I think certain prototype words should also be considered "reserved" because they are important JavaScript words that should not be treated as stop words. For example...

My contribution to 2021 Earth Day: optimizing some bad favicons on MDN Web Docs

April 23, 2021
0 comments Web development, MDN

tl;dr; The old /favicon.ico was 15KB and due to bad caching was downloaded 24M times in the last month totaling ~350GB of server-to-client traffic which can almost all be avoided.

How to save the planet? Well, do something you can do, they say. OK, what I can do is reduce the amount of electricity consumed to browse the web. Mozilla's MDN Web Docs, which I work on, has a lot of traffic from all over the world. In the last 30 days, we have roughly 70M pageviews across roughly 15M unique users.
A lot of these people come back to MDN more than once per month so good assets and good asset-caching matter.

I found out that somehow we had failed to optimize the /favicon.ico asset! It was 15,086 bytes when, with Optimage, I was quickly able to turn it down to 1,153 bytes. That's a 13x improvement! Here's what that looks like when zoomed in 4x:

Old and new favicon.ico

The next challenge was the Cache-Control. Our CDN is AWS Cloudfront and it respects whatever Cache-Control headers we set on the assets. Because favicon.ico doesn't have a unique hash in its name, the Cache-Control falls back to the default of 24 hours (max-age=86400) which isn't much. Especially for an asset that almost never changes and besides, if we do decide to change the image (but not the name) we'd have to wait a minimum of 24 hours until it's fully rolled out.

Another thing I did as part of this was to stop assuming the default URL of /favicon.ico and instead control it with the <link rel="shortcut icon" href="/favicon.323ad90c.ico" type="image/x-icon"> HTML tag. Now I can control the URL of the image that will be downloaded.

Our client-side code is based on create-react-app and it can't optimize the files in the client/public/ directory.
So I wrote a script that post-processes the files in client/build/. In particular, it looks through the index.html template and replaces...


<link rel="shortcut icon" href="/favicon.ico" type="image/x-icon">

...with...


<link rel="shortcut icon" href="/favicon.323ad90c.ico" type="image/x-icon">

Plus, it makes a copy of the file with this hash in its name so that the old URL still resolves. But now we can cache it much more aggressively: 1 year, in fact.
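
For the curious, the gist of that post-processing is something like this (a simplified Python sketch of the idea, not the actual build script; the paths and hash length are made up):

import hashlib
import shutil
from pathlib import Path

build_dir = Path("client/build")  # hypothetical create-react-app build output
favicon = build_dir / "favicon.ico"

# Derive a short content hash and copy the file under the hashed name
# so the old /favicon.ico URL still resolves.
digest = hashlib.md5(favicon.read_bytes()).hexdigest()[:8]
hashed_name = f"favicon.{digest}.ico"
shutil.copyfile(favicon, build_dir / hashed_name)

# Rewrite the built index.html to point to the hashed file name.
index_html = build_dir / "index.html"
index_html.write_text(index_html.read_text().replace("/favicon.ico", f"/{hashed_name}"))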

In summary

Combined, we used to have ~350GB worth of data sent from our CDN(s) to people's browsers every month.
Just changing the image itself would turn that number into ~25GB instead.
The new Cache-Control hopefully means that all those returning users can skip the download on a daily basis which will reduce the amount of network usage even more, but it's hard to predict in advance.

How MDN's site-search works

February 26, 2021
3 comments Web development, Django, Python, MDN, Elasticsearch

tl;dr; Periodically, the whole of MDN is built, by our Node code, in a GitHub Action. A Python script bulk-publishes this to Elasticsearch. Our Django server queries the same Elasticsearch via /api/v1/search. The site-search page is a static single-page app that sends XHR requests to the /api/v1/search endpoint. Search results' sort-order is determined by match and "popularity".

Jamstack'ing

The challenge with "Jamstack" websites is data that is so vast and dynamic that it doesn't make sense to build it statically. Search is one of those. For the record, as of Feb 2021, MDN consists of 11,619 documents (aka. articles) in English, plus roughly another 40,000 translated documents. In English alone, there are 5.3 million words. So to build a good search experience we need to, as a side-effect of the static site build, index all of this in a full-text search database. Elasticsearch is one such database, and it's good. In particular, Elasticsearch is something MDN is already quite familiar with because it's what was used from within the Django app when MDN was a wiki.

Note: MDN gets about 20k site-searches per day from within the site.

Build

Diagram

When we build the whole site, it's a script that basically loops over all the raw content, applies macros and fixes, dumps one index.html (via React server-side rendering) and one index.json. The index.json contains all the fully rendered text (as HTML!) in blocks of "prose". It looks something like this:


{
  "doc": {
    "title": "DOCUMENT TITLE",
    "summary": "DOCUMENT SUMMARY",
    "body": [
      {
        "type": "prose",
        "value": {
          "id": "introduction",
          "title": "INTRODUCTION",
          "content": "<p>FIRST BLOCK OF TEXTS</p>"
        }
      },
      ...
    ],
    "popularity": 0.12345,
    ...
  }
}

You can see one here: /en-US/docs/Web/index.json

Indexing

Next, after all the index.json files have been produced, a Python script takes over and it traverses all the index.json files and, based on that structure, it figures out the title, summary, and the whole body (as HTML).

Next up, before sending this into the bulk-publisher in Elasticsearch it strips the HTML. It's a bit more than just turning <p>Some <em>cool</em> text.</p> to Some cool text. because it also cleans up things like <div class="hidden"> and certain <div class="notecard warning"> blocks.
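
To give you an idea, the clean-up is something along these lines (a sketch using BeautifulSoup; the real script may do it differently):

from bs4 import BeautifulSoup


def strip_html(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    # Drop blocks that shouldn't be searchable at all.
    for node in soup.select("div.hidden, div.notecard.warning"):
        node.decompose()
    return soup.get_text(separator=" ", strip=True)


print(strip_html('<p>Some <em>cool</em> text.</p><div class="hidden">secret</div>'))
# Some cool text.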

One thing worth noting is that this whole thing runs roughly every 24 hours and then it builds everything. But what if a certain page has been removed (or moved) between two runs? How do you remove what was previously added to Elasticsearch? The solution is simple: it deletes and re-creates the index from scratch every day. The whole bulk-publish takes a while, so right after the index has been deleted, the searches won't be that great. Someone could be unlucky in that they're searching MDN a couple of seconds after the index was deleted and is now waiting for it to build up again.
It's an unfortunate reality but it's a risk worth taking for the sake of simplicity. Also, most people are searching for things in English and specifically the Web/ tree, so the bulk-publishing is done in such a way that the most popular content is bulk-published first and the rest is done after. Here's what the build logs look like:

Found 50,461 (potential) documents to index
Deleting any possible existing index and creating a new one called mdn_docs
Took 3m 35s to index 50,362 documents. Approximately 234.1 docs/second
Counts per priority prefixes:
    en-us/docs/web                 9,056
    *rest*                         41,306

So, yes, for 3m 35s there's stuff missing from the index and some unlucky few will get fewer search results than they should. But we can optimize this in the future.

Searching

The way you connect to Elasticsearch is simply by a URL. It looks something like this:

https://USER:PASSWD@HASH.us-west-2.aws.found.io:9243

It's an Elasticsearch cluster managed by Elastic running inside AWS. Our job is to make sure that we put the exact same URL in our GitHub Action ("the writer") as we put it into our Django server ("the reader").
In fact, we have 3 Elastic clusters: Prod, Stage, Dev.
And we have 2 Django servers: Prod, Stage.
So we just need to carefully make sure the secrets are set correctly to match the right environment.

Now, in the Django server, we just need to convert a request like GET /api/v1/search?q=foo&locale=fr (for example) to a query to send to Elasticsearch. We have a simple Django view function that validates the query string parameters, does some rate-limiting, creates a query (using elasticsearch-dsl) and packages the Elasticsearch results back to JSON.
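
Stripped of all the validation and rate-limiting, such a view is roughly this shape (a sketch, not our actual code):

from django.http import HttpResponseBadRequest, JsonResponse
from elasticsearch_dsl import Search


def api_search(request):
    q = (request.GET.get("q") or "").strip()
    if not q:
        return HttpResponseBadRequest("missing 'q'")

    # "mdn_docs" is the index name from the build logs further up.
    search_query = Search(index="mdn_docs").query("match", title=q)
    response = search_query[:10].execute()
    documents = [{"title": hit.title, "score": hit.meta.score} for hit in response]
    return JsonResponse({"documents": documents})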

How we make that query is important. In here lies the most important feature of the search; how it sorts results.

In one simple explanation, the sort order is a combination of popularity and "matchness". The assumption is that most people want the popular content. I.e. they search for foreach and mean to go to /en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach, not /en-US/docs/Web/API/NodeList/forEach, both of which contain forEach in the title. The "popularity" is based on Google Analytics pageviews which we download periodically and normalize into a floating-point number between 0 and 1. At the time of writing, the scoring function does something like this:

rank = doc.popularity * 10 + search.score

This seems to produce pretty reasonable results.

But there's more to the "matchness" too. Elasticsearch has its own API for defining boosting and the way we apply it is:

  • match phrase in the title: Boost = 10.0
  • match phrase in the body: Boost = 5.0
  • match in title: Boost = 2.0
  • match in body: Boost = 1.0

This is then applied on top of whatever else Elasticsearch does such as "Term Frequency" and "Inverse Document Frequency" (tf and idf). This article is a helpful introduction.

We're most likely not done with this. There's probably a lot more we can do to tune this myriad of knobs and sliders to get the best possible ranking of documents that match.
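
In elasticsearch-dsl terms, the combination of those boosts and the popularity factor can be sketched like this (again, a simplification of the real query):

from elasticsearch_dsl import Q, Search


def build_search(index_name: str, query: str) -> Search:
    matches = Q(
        "bool",
        should=[
            Q("match_phrase", title={"query": query, "boost": 10.0}),
            Q("match_phrase", body={"query": query, "boost": 5.0}),
            Q("match", title={"query": query, "boost": 2.0}),
            Q("match", body={"query": query, "boost": 1.0}),
        ],
        minimum_should_match=1,
    )
    # Fold the popularity on top of the text score, roughly
    # rank = doc.popularity * 10 + search.score
    scored = Q(
        "function_score",
        query=matches,
        functions=[
            {"field_value_factor": {"field": "popularity", "factor": 10.0, "missing": 0.0}}
        ],
        boost_mode="sum",
    )
    return Search(index=index_name).query(scored)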

Web UI

The last piece of the puzzle is how we display all of this to the user. The way it works is that developer.mozilla.org/$locale/search returns a static page that is blank. As soon as the page has loaded, it lazy-loads JavaScript that can actually issue the XHR request to get and display search results. The code looks something like this:


function SearchResults() {
  const [searchParams] = useSearchParams();
  const sp = createSearchParams(searchParams);
  // add defaults and stuff here
  const fetchURL = `/api/v1/search?${sp.toString()}`;

  const { data, error } = useSWR(
    fetchURL,
    async (url) => {
      const response = await fetch(url);
      // various checks on response.status here
      return await response.json();
    }
  );

  // render 'data' or 'error' accordingly here
}

A lot of interesting details are omitted from this code snippet. You have to check it out for yourself to get a more up-to-date insight into how it actually works. But basically, the window.location (and pushState) query string drives the fetch() call and then all the component has to do is display the search results with some highlighting.

The /api/v1/search endpoint also runs a suggestion query as part of the main search query. This extracts interesting alternative search queries. These are filtered and scored and we issue "sub-queries" just to get a count for each. Now we can do one of those "Did you mean...". For example: search for intersections.
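
With elasticsearch-dsl, attaching a suggester to a search looks something like this (a sketch; the real implementation does more filtering and scoring):

from elasticsearch_dsl import Search


def search_with_suggestions(index_name: str, query: str):
    search_query = Search(index=index_name).query("match", title=query)
    # Ask Elasticsearch for "did you mean" style term suggestions too.
    search_query = search_query.suggest(
        "title_suggestions", query, term={"field": "title"}
    )
    response = search_query.execute()
    for entry in response.suggest.title_suggestions:
        for option in entry.options:
            print("Did you mean:", option.text, option.score)
    return response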

In conclusion

There are a lot of interesting, important, and careful details that are glossed over in this blog post. It's a constantly evolving system and we're constantly trying to improve and perfect it so that it fits what users expect.

A lot of people reach MDN via a Google search (e.g. mdn array foreach) but despite that, nearly 5% of all traffic on MDN is the site-search functionality. The /$locale/search?... endpoint is the most frequently viewed page of all of MDN. And having a good, reliable search engine is nevertheless important. Owning and controlling the whole pipeline allows us to do specific things that are unique to MDN that other websites don't need. For example, we index a lot of raw HTML (e.g. <video>) and we have code snippets that need to be searchable.

Hopefully, the MDN site-search will elevate from being known as very limited to something that can genuinely help people get to the exact page better than Google can. Yes, it's worth aiming high!

Check your email addresses in Python, as a whole

May 22, 2020
0 comments Python, MDN

So recently, in MDN, we changed the setting WELCOME_EMAIL_FROM. Seems harmless, right? Wrong, it failed horribly at runtime and we didn't notice until it was in production. Here's the traceback:

SMTPSenderRefused: (552, b"5.1.7 The sender's address was syntactically invalid.\n5.1.7 see : http://support.socketlabs.com/kb/84 for more information.", '=?utf-8?q?Janet?=')
(8 additional frame(s) were not displayed)
...
  File "newrelic/api/function_trace.py", line 151, in literal_wrapper
    return wrapped(*args, **kwargs)
  File "django/core/mail/message.py", line 291, in send
    return self.get_connection(fail_silently).send_messages([self])
  File "django/core/mail/backends/smtp.py", line 110, in send_messages
    sent = self._send(message)
  File "django/core/mail/backends/smtp.py", line 126, in _send
    self.connection.sendmail(from_email, recipients, message.as_bytes(linesep='\r\n'))
  File "python3.8/smtplib.py", line 871, in sendmail
    raise SMTPSenderRefused(code, resp, from_addr)

SMTPSenderRefused: (552, b"5.1.7 The sender's address was syntactically invalid.\n5.1.7 see : http://support.socketlabs.com/kb/84 for more information.", '=?utf-8?q?Janet?=')

Yikes!

So, to prevent this from happening ever again, we're putting this check in:


from email.utils import parseaddr

WELCOME_EMAIL_FROM = config("WELCOME_EMAIL_FROM", ...)

# If this fails, SMTP will probably also fail.
assert parseaddr(WELCOME_EMAIL_FROM)[1].count('@') == 1, parseaddr(WELCOME_EMAIL_FROM)

You could go to town even more on this. Perhaps use the email validator within Django, but for now I'd call that overkill. This is just a decent check before anything gets a chance to go wrong.
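
If you do want to go further, a sketch of what that could look like with Django's built-in validator:

from email.utils import parseaddr

from django.core.exceptions import ValidationError
from django.core.validators import validate_email


def check_from_address(value):
    # E.g. 'Janet <janet@example.com>' -> ('Janet', 'janet@example.com')
    _name, address = parseaddr(value)
    try:
        validate_email(address)
    except ValidationError:
        raise ValueError(f"{value!r} doesn't look like a usable From address")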

MDN Documents Size Tree Map

November 14, 2019
0 comments MDN, Web development

Recently I've been playing with the content of MDN as a whole. MDN has ~140k documents in its Wiki. About ~70k of them are redirects, which is the result of many years of switching tech and switching information architecture while, at the same time, being good Internet citizens and avoiding 404s. So, how do the remaining ~70k documents spread out? To answer that I wrote a Python script that evaluates size as the sum of all the files in each sub-tree, including pictures.
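
The core of that script is not much more than this (a simplified sketch; the path at the bottom is made up):

import os
from pathlib import Path


def subtree_sizes(root):
    """Sum the bytes of every file (pictures included) per immediate sub-tree."""
    sizes = {}
    for subtree in Path(root).iterdir():
        if not subtree.is_dir():
            continue
        total = 0
        for dirpath, _dirnames, filenames in os.walk(subtree):
            for filename in filenames:
                total += os.path.getsize(os.path.join(dirpath, filename))
        sizes[subtree.name] = total
    return sizes


for name, size in sorted(subtree_sizes("path/to/mdn/files").items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:<30} {size / 1024 / 1024:.1f} MB")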

Here are the screenshots:

All locales

Specifically en-US

The code that puts this together uses Toast UI which seems cool but I didn't spend much time worrying about how to use it.

Be warned! Opening this link will make your browser sweat: https://8mw9v.csb.app/

You can fork it here: https://codesandbox.io/s/zen-swirles-8mw9v
