
Pip-Outdated.py - a script to compare requirements.in with the output of pip list --outdated

December 22, 2022
0 comments Python

Simply by posting this, there's a big chance someone will say "Hey! Didn't you know there's already a well-known script that does this, and better?" Or you'll say "Hey! That'll save me hundreds of seconds per year!"

The problem

Suppose you have a requirements.in file that is used by pip-compile to generate the requirements.txt that you actually install in your Dockerfile or whatever server deployment you use. The requirements.in is meant to be the human-readable file and the requirements.txt is for the computers. You manually edit the version numbers in the requirements.in and then run pip-compile --generate-hashes requirements.in to generate a new requirements.txt. But the "first-class" packages in the requirements.in aren't the only packages that get installed. For example:

▶ cat requirements.in | rg '==' | wc -l
      54

▶ cat requirements.txt | rg '==' | wc -l
     102

In other words, in this particular example, there are 48 "second-class" packages that get installed. There might actually be even more stuff installed that you never described. That's why pip list | wc -l can be even higher. For example, you might have locally and manually done pip install ipython for a nicer interactive prompt.

The solution

The command pip list --outdated lists outdated packages based on everything that's installed (effectively the requirements.txt), not just the requirements.in. To mitigate that, I wrote a quick Python CLI script that combines the output of pip list --outdated with the packages mentioned in requirements.in:


#!/usr/bin/env python

import subprocess


def main(*args):
    if not args:
        requirements_in = "requirements.in"
    else:
        requirements_in = args[0]
    required = {}
    # Parse the pinned packages out of requirements.in, e.g. "Django==4.1.4"
    with open(requirements_in) as f:
        for line in f:
            if "==" in line:
                package, version = line.strip().split("==")
                # Strip any extras, e.g. "celery[redis]" -> "celery"
                package = package.split("[")[0]
                required[package] = version

    res = subprocess.run(["pip", "list", "--outdated"], capture_output=True)
    if res.returncode:
        raise Exception(res.stderr)

    lines = res.stdout.decode("utf-8").splitlines()
    # The header lines of `pip list --outdated` won't match any required
    # package name, so this filters them out too.
    relevant = [line for line in lines if line.split()[0] in required]

    longest_package_name = max([len(x.split()[0]) for x in relevant]) if relevant else 0

    for line in relevant:
        p, installed, possible, *_ = line.split()
        if p in required:
            print(
                p.ljust(longest_package_name + 2),
                "INSTALLED:",
                installed.ljust(9),
                "POSSIBLE:",
                possible,
            )


if __name__ == "__main__":
    import sys

    sys.exit(main(*sys.argv[1:]))

Installation

To install this, you can just download the script and run it in any directory that contains a requirements.in file.

Or you can install it like this:

curl -L https://gist.github.com/peterbe/099ad364657b70a04b1d65aa29087df7/raw/23fb1963b35a2559a8b24058a0a014893c4e7199/Pip-Outdated.py > ~/bin/Pip-Outdated.py
chmod +x ~/bin/Pip-Outdated.py
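
Then, assuming ~/bin is in your PATH, run it in any directory that has a requirements.in file. The output (hypothetical package names and version numbers here) looks something like this:

▶ Pip-Outdated.py
black      INSTALLED: 22.10.0   POSSIBLE: 22.12.0
requests   INSTALLED: 2.28.0    POSSIBLE: 2.28.1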

Pip-Outdated.py

Join a list with a bitwise or operator in Python

August 22, 2022
0 comments Python

The bitwise OR operator in Python is often convenient when you want to combine multiple things into one thing. For example, with the Django ORM you might do this:


from django.db.models import Q

filter_ = Q(first_name__icontains="peter") | Q(first_name__icontains="ashley")

for contact in Contact.objects.filter(filter_):
    print((contact.first_name, contact.last_name))

See how it hardcodes the filtering on the strings peter and ashley.
But what if that needed to be a bit more conditional:


from django.db.models import Q

filter_ = Q(first_name__icontains="peter")
if include("ashley"):  # include() is some business-logic predicate
    filter_ |= Q(first_name__icontains="ashley")

for contact in Contact.objects.filter(filter_):
    print((contact.first_name, contact.last_name))

So far, same functionality.

But what if the business logic is more complicated? You can't do this:


filter_ = None
if include("peter"):
    filter_ |= Q(first_name__icontains="peter")  # WILL NOT WORK
if include("ashley"):
    filter_ |= Q(first_name__icontains="ashley")  # |= on None raises TypeError

for contact in Contact.objects.filter(filter_):
    print((contact.first_name, contact.last_name))

What if the things you want to filter on come from a list? You'd need to do the |= stuff "dynamically". One way to solve that is with functools.reduce. Suppose you gather the Q objects you want to bitwise-OR together into a list:


from django.db.models import Q
from operator import or_
from functools import reduce


def include(_):
    import random
    return random.random() > 0.5

filters = []
if include("peter"):
    filters.append(Q(first_name__icontains="peter"))
if include("ashley"):
    filters.append(Q(first_name__icontains="ashley"))

assert len(filters), "must have at least one filter"
filter_ = reduce(or_, filters)  # THE MAGIC!

for contact in Contact.objects.filter(filter_):
    print((contact.first_name, contact.last_name))

And finally, if it's a list already:


from django.db.models import Q
from operator import or_
from functools import reduce

names = ["peter", "ashley"]
qs = [Q(first_name__icontains=x) for x in names]
filter_ = reduce(or_, qs)

for contact in Contact.objects.filter(filter_):
    print((contact.first_name, contact.last_name))

Side note

Django's django.db.models.Q is actually quite flexible when used with MyModel.objects.filter(...) because this actually works:


from django.db.models import Q

def include(_):
    import random
    return random.random() > 0.5

filter_ = Q()  # MAGIC SAUCE
if include("peter"):
    filter_ |= Q(first_name__icontains="peter")
if include("ashley"):
    filter_ |= Q(first_name__icontains="ashley")

for contact in Contact.objects.filter(filter_):
    print((contact.first_name, contact.last_name))
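
By the way, functools.reduce accepts an optional initializer, so you can combine the two approaches above. With an empty Q() as the starting value, an empty list of filters means "match everything" instead of crashing on the assert. A minimal sketch:


from django.db.models import Q
from operator import or_
from functools import reduce

names = ["peter", "ashley"]
qs = [Q(first_name__icontains=x) for x in names]
# Q() as the initializer means this works even if the list is empty
filter_ = reduce(or_, qs, Q())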

How to sort case insensitively with empty strings last in Django

April 3, 2022
1 comment Django, Python, PostgreSQL

Imagine you have something like this in Django:


class MyModel(models.Model):
    last_name = models.CharField(max_length=255, blank=True)
    ...

The most basic sorting is either queryset.order_by('last_name') or queryset.order_by('-last_name'). But what if you want entries with a blank string last? And you want it to be case insensitive. Here's how you do it:


from django.db.models.functions import Lower, NullIf
from django.db.models import Value


if reverse:
    order_by = Lower("last_name").desc()
else:
    order_by = Lower(NullIf("last_name", Value("")), nulls_last=True)

queryset = queryset.order_by(order_by)

ALL = list(queryset.values_list("last_name", flat=True))
print("FIRST 5:", ALL[:5])
# Will print either...
#   FIRST 5: ['Zuniga', 'Zukauskas', 'Zuccala', 'Zoller', 'ZM']
# or
#   FIRST 5: ['A', 'aaa', 'Abrams', 'Abro', 'Absher']
print("LAST 5:", ALL[-5:])
# Will print...
#   LAST 5: ['', '', '', '', '']

This is only tested with PostgreSQL but it works nicely.
If you're curious about what the SQL becomes, it's:


SELECT "main_contact"."last_name" FROM "main_contact" 
ORDER BY LOWER(NULLIF("main_contact"."last_name", '')) ASC

or


SELECT "main_contact"."last_name" FROM "main_contact" 
ORDER BY LOWER("main_contact"."last_name") DESC

Note that if your table column can be a string, an empty string, or null, the reverse needs to be: Lower("last_name", nulls_last=True).desc().
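
Spelling that note out as code, with a handy way to double-check what SQL you actually get (a sketch; it assumes the column is nullable):


from django.db.models.functions import Lower

# Keep NULLs last even when sorting in reverse
order_by = Lower("last_name", nulls_last=True).desc()
queryset = queryset.order_by(order_by)

# Quick trick for inspecting the generated SQL
print(queryset.query)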

How to close a HTTP GET request in Python before the end

March 30, 2022
0 comments Python

Does your server barf if a client closes the connection before the response is fully downloaded? Well, there's an easy way to find out. You can use this Python script:


import sys
import requests

url = sys.argv[1]
assert '://' in url, url
r = requests.get(url, stream=True)
if r.encoding is None:
    r.encoding = 'utf-8'
for chunk in r.iter_content(1024, decode_unicode=True):
    # Deliberately stop after the first chunk, leaving the rest
    # of the response undownloaded.
    break

I use the xh CLI tool a lot. It's like curl but better at some things. By default, if you use --headers it will make a regular GET request but close the connection as soon as it has gotten all the headers. E.g.

▶ xh --headers https://www.peterbe.com
HTTP/2.0 200 OK
cache-control: public,max-age=3600
content-type: text/html; charset=utf-8
date: Wed, 30 Mar 2022 12:37:09 GMT
etag: "3f336-Rohm58s5+atf5Qvr04kmrx44iFs"
server: keycdn-engine
strict-transport-security: max-age=63072000; includeSubdomains; preload
vary: Accept-Encoding
x-cache: HIT
x-content-type-options: nosniff
x-edge-location: usat
x-frame-options: SAMEORIGIN
x-middleware-cache: hit
x-powered-by: Express
x-shield: active
x-xss-protection: 1; mode=block

That's not to be confused with doing a HEAD request like curl -I ....

So either with xh or the Python script above, you can get that same effect. It's a useful trick when you want to make sure your (async) server doesn't attempt to do weird stuff with the "Response" object after the connection has closed.
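
If you want the cleanup to be explicit, requests responses also work as context managers, so a variant of the script above (same idea, just with explicit closing) could look like this:


import sys

import requests

url = sys.argv[1]
with requests.get(url, stream=True) as r:
    # Read just the first 1KB chunk; the context manager closes
    # the connection on exit.
    next(iter(r.iter_content(1024)), None)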

How to string pad a string in Python with a variable

October 19, 2021
1 comment Python

I just have to write this down because that's the rule; if I find myself googling something basic like this more than once, it's worth blogging about.

Suppose you have a string and you want to pad it with spaces. You have 2 options:


>>> s = "peter"
>>> s.ljust(10)
'peter     '
>>> f"{s:<10}"
'peter     '

The f-string notation is often more convenient because it can be combined with other formatting directives.
But, suppose the number 10 isn't hardcoded like that. Suppose it's a variable:


>>> s = "peter"
>>> width = 11
>>> s.ljust(width)
'peter      '
>>> f"{s:<width}"
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: Invalid format specifier

Well, when the width is a variable like that, the f-string syntax you need is nested curly braces:


>>> f"{s:<{width}}"
'peter      '
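
The nested braces work for other parts of the format spec too. For example, padding with a fill character as well as a variable width:


>>> s = "peter"
>>> width = 11
>>> fill = "."
>>> f"{s:{fill}<{width}}"
'peter......'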

TypeScript function keyword arguments like Python

September 8, 2021
0 comments Python, JavaScript

To do this in Python:


def print_person(name="peter", dob=1979):
    print(f"name={name}\tdob={dob}")


print_person() 
# prints: name=peter dob=1979

print_person(name="Tucker")
# prints: name=Tucker    dob=1979

print_person(dob=2013)
# prints: name=peter dob=2013

print_person(sex="boy")
# TypeError: print_person() got an unexpected keyword argument 'sex'

...in TypeScript:


function printPerson({
  name = "peter",
  dob = 1979
}: { name?: string; dob?: number } = {}) {
  console.log(`name=${name}\tdob=${dob}`);
}

printPerson();
// prints: name=peter    dob=1979

printPerson({});
// prints: name=peter    dob=1979

printPerson({ name: "Tucker" });
// prints: name=Tucker   dob=1979

printPerson({ dob: 2013 });
// prints: name=peter    dob=2013


printPerson({ gender: "boy" })
// Error: Object literal may only specify known properties, and 'gender'
// does not exist in type '{ name?: string; dob?: number; }'.

Here's a Playground copy of it.

It's not a perfect "transpose" across the two languages but it's sufficiently similar.
The trick is the = {} at the end of the function signature in TypeScript, which makes it possible to call the function without passing an object at all (the per-key defaults take care of omitted keys).

By the way, the pure JavaScript version of this is:


function printPerson({ name = "peter", dob = 1979 } = {}) {
  console.log(`name=${name}\tdob=${dob}`);
}

But, unlike Python and TypeScript, you get no warnings or errors if you'd do printPerson({ gender: "boy" }); with the JavaScript version.

How to install Python Poetry in GitHub Actions in a MUCH faster way

July 27, 2021
0 comments Python

We use Poetry in a GitHub project. There's a pyproject.toml file (and a poetry.lock file) which, with the help of the poetry executable, gets you a very reliable Python environment. The only problem is that adding the poetry executable is slow. Like 10+ seconds slow. It might seem silly, but in the project I'm working on, that 10+ second delay is the slowest part of a GitHub Actions workflow that needs to be fast because it's trying to post a comment on a pull request as soon as it possibly can.

Installing poetry being the slowest part

First I tried caching $(pip cache dir) so that the underlying python -m pip install virtualenv -t $tmp_dir that install-poetry.py does would get a boost from avoiding the network. The difference was negligible. I also didn't want to get too weird by overriding how install-poetry.py works, or even making my own hacky copy. I like being able to just rely on the snok/install-poetry GitHub Action to do its thing (and its future thing).

The solution was to cache the whole $HOME/.local directory. It's as simple as this:


- name: Load cached $HOME/.local
  uses: actions/cache@v2.1.6
  with:
    path: ~/.local
    key: dotlocal-${{ runner.os }}-${{ hashFiles('.github/workflows/pr-deployer.yml') }}

The key is important. If you do copy-n-paste this block of YAML to speed up your GitHub Action, please remember to replace .github/workflows/pr-deployer.yml with the name of your .yml file that uses this. It's important because otherwise, the cache might be overzealously hot when you make a change like:


       - name: Install Python poetry
-        uses: snok/install-poetry@v1.1.6
+        uses: snok/install-poetry@v1.1.7
         with:

...for example.

Now, thankfully, install-poetry.py (which is the recommended way to install poetry, by the way) notices that everything has already been created, so it can omit a bunch of work. The result of this is as follows:

A fast poetry install

From 10+ seconds to 2 seconds. And what's neat is that the optimization is very "unintrusive" because it doesn't mess with how the snok/install-poetry workflow works.

But wait, there's more!

If you dig up our code where we use poetry you might find that it does a bunch of other caching too. In particular, it also caches the .venv it creates. That's relevant, but ultimately a separate optimization. It basically caches the virtualenv generated by the poetry install command. It works like this:


- name: Load cached venv
  id: cached-poetry-dependencies
  uses: actions/cache@v2.1.6
  with:
    path: deployer/.venv
    key: venv-${{ runner.os }}-${{ hashFiles('**/poetry.lock') }}-${{ hashFiles('.github/workflows/pr-deployer.yml') }}

...

- name: Install deployer
  run: |
    cd deployer
    poetry install
  if: steps.cached-poetry-dependencies.outputs.cache-hit != 'true'

In this example, deployer is just the name of the directory, in the repository root, where we have all the Python code and the pyproject.toml etc. If you have yours at the root of the project you can just do: run: poetry install and in the caching step change it to: path: .venv.

Now, you get a really powerful complete caching strategy. When the caches are hot (i.e. no changes to the .yml, poetry.lock, or pyproject.toml files) you get the executable (so you can do poetry run ...) and all its dependencies in roughly 2 seconds. That'll be hard to beat!

An effective and immutable way to turn two Python lists into one

June 23, 2021
7 comments Python

tl;dr; To make 2 lists into 1 without mutating them use list1 + list2.

I'm blogging about this because today I accidentally complicated my own code. From now on, let's just focus on the right way.

Suppose you have something like this:


winners = [123, 503, 1001]
losers = [45, 812, 332]

combined = winners + losers

That will create a brand new list. To prove that the original lists aren't mutated:

>>> combined.insert(0, 100)
>>> combined
[100, 123, 503, 1001, 45, 812, 332]
>>> winners
[123, 503, 1001]
>>> losers
[45, 812, 332]

What I originally did was:


winners = [123, 503, 1001]
losers = [45, 812, 332]

combined = [*winners, *losers]

This works the same and that syntax feels very JavaScript'y. E.g.

> var winners = [123, 503, 1001]
[ 123, 503, 1001 ]
> var losers = [45, 812, 332]
[ 45, 812, 332 ]
> var combined = [...winners, ...losers]
[ 123, 503, 1001, 45, 812, 332 ]
> combined.pop()
332
> losers
[ 45, 812, 332 ]

By the way, if you want to filter out duplicates, do this:


>>> a = [1, 2, 3]
>>> b = [2, 3, 4]
>>> list(dict.fromkeys(a + b))
[1, 2, 3, 4]

It's the most performant way to do it when the order is important, since dict preserves insertion order (Python 3.7+) and dict.fromkeys keeps the first occurrence of each duplicate.

And if you don't care about the order you can use this (keep in mind that the order you get out of a set is arbitrary):

>>> a = [1, 2, 3]
>>> b = [2, 3, 4]
>>> list(set(a + b))
[1, 2, 3, 4]
>>> list(set(b + a))
[1, 2, 3, 4]

The correct way to index data into Elasticsearch with (Python) elasticsearch-dsl

May 14, 2021
0 comments Python, MDN, Elasticsearch

This is how MDN Web Docs uses Elasticsearch. Daily, we build all the content and then upload it all with elasticsearch-dsl, using aliases. Because there are no good complete guides for doing this, I thought I'd write it down for the next person who needs to do something similar. Let's jump straight into the code. The reader will need a healthy dose of imagination to fill in their details.

Indexing


# models.py

from datetime import datetime

from elasticsearch_dsl import Document, Text

PREFIX = "myprefix"


class MyDocument(Document):
    title = Text()
    body = Text()
    # ...

    class Index:
        name = (
            f'{PREFIX}_{datetime.utcnow().strftime("%Y%m%d%H%M%S")}'
        )

What's important to note here is that the MyDocument.Index.name is dynamically allocated every single time the module is imported. It's not very important exactly what it is called but it's important that it becomes unique each time.
This means that when you start using MyDocument it will automatically figure out which index to use. Now, it's time to create the index and bulk publish it.


# index.py
# Note! This example code skips over things like progress bars
# and verbose logging and misc sanity checks and stuff.

import json
from pathlib import Path

from elasticsearch.helpers import parallel_bulk
from elasticsearch_dsl import Index
from elasticsearch_dsl.connections import connections

from .models import MyDocument, PREFIX


def index(buildroot: Path, url: str, update=False):
    """
    * 'buildroot' is where the files are that we're going to read and index
    * 'url' is the host URL for the Elasticsearch server
    * 'update' is if you just want to "cake on" a couple of documents
      instead of starting over and doing a complete indexing.
    """

    # Connect and stuff
    connections.create_connection(hosts=[url], retry_on_timeout=True)
    connection = connections.get_connection()
    health = connection.cluster.health()
    status = health["status"]
    if status not in ("green", "yellow"):
        raise Exception(f"status {status} not green or yellow")

    if update:
        for name in connection.indices.get_alias():
            if name.startswith(f"{PREFIX}_"):
                document_index = Index(name)
                break
        else:
            # IndexAliasError is a custom exception class (left to the reader)
            raise IndexAliasError(
                f"Unable to find an index called {PREFIX}_*"
            )

    else:
        # Confusingly, `._index` is actually not a private API.
        # It's the documented way you're supposed to reach it.
        document_index = MyDocument._index
        document_index.create()

    def generator():
        for doc in Path(buildroot).glob("**/*.json"):
            # The reason for specifying the exact index name is that we might
            # be doing an update and if you don't specify it, elasticsearch_dsl
            # will fall back to using whatever Document._meta.Index automatically
            # becomes in this moment.
            yield to_search(doc, _index=document_index._name).to_dict(True)

    for success, info in parallel_bulk(connection, generator()):
        # 'success' is a boolean
        # 'info' has stuff like:
        #  - info["index"]["error"]
        #  - info["index"]["_shards"]["successful"]
        #  - info["index"]["_shards"]["failed"]
        pass

    if update:
        # When you do an update, Elasticsearch will internally delete the
        # previous docs (based on the _id primary key we set).
        # Normally, Elasticsearch will do this when you restart the cluster
        # but that's not something we usually do.
        # See https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-forcemerge.html
        document_index.forcemerge()
    else:
        # Now we're going to bundle the change to set the alias to point
        # to the new index and delete all old indexes.
        # The reason for doing this together in one update is to make it atomic.
        alias_updates = [
            {"add": {"index": document_index._name, "alias": PREFIX}}
        ]
        for index_name in connection.indices.get_alias():
            if index_name.startswith(f"{PREFIX}_"):
                if index_name != document_index._name:
                    alias_updates.append({"remove_index": {"index": index_name}})
        connection.indices.update_aliases({"actions": alias_updates})

    print("All done!")



def to_search(file: Path, _index=None):
    with open(file) as f:
        data = json.load(f)
    return MyDocument(
        _index=_index,
        _id=data["identifier"],
        title=data["title"],
        body=data["body"]
    )

A lot is left to the reader as an exercise to fill in, but these are the most important operations. It demonstrates how you can:

  1. Correctly create indexes
  2. Atomically create an alias and clean up old indexes (and aliases)
  3. Add to an existing index

After you've run this you'll see something like this:

$ curl http://localhost:9200/_cat/indices?v
...
health status index                   uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   myprefix_20210514141421 vulVt5EKRW2MNV47j403Mw   1   1      11629            0     28.7mb         28.7mb

$ curl http://localhost:9200/_cat/aliases?v
...
alias    index                   filter routing.index routing.search is_write_index
myprefix myprefix_20210514141421 -      -             -              -

Searching

When it comes to using the index, well, it depends on where your code for that is. For example, on MDN Web Docs, the code that searches the index is in an entirely different code-base. It's incidentally Python (and elasticsearch-dsl) in both places but other than that they have nothing in common. So for the searching, you need to manually make sure you write down the name of the index (or name of the alias if you prefer) into the code that searches. For example:


from elasticsearch_dsl import Search

def search(params):
    search_query = Search(index=settings.SEARCH_INDEX_NAME)

    # Do stuff to 'search_query' based on 'params'

    response = search_query.execute()   
    for hit in response:
        # ...

If you're within the same code that has that models.MyDocument in the first example code above, you can simply do things like this:


from elasticsearch_dsl import Index
from elasticsearch_dsl.connections import connections

from .models import PREFIX


def analyze(
    url: str,
    text: str,
    analyzer: str,
):
    connections.create_connection(hosts=[url])
    index = Index(PREFIX)
    analysis = index.analyze(body={"text": text, "analyzer": analyzer})
    # ...

Umlauts (non-ascii characters) with git on macOS

March 22, 2021
0 comments Python, macOS

I edit a file called files/en-us/glossary/bézier_curve/index.html and then type git status and I get this:

▶ git status
...
Changes not staged for commit:
  ...
    modified:   "files/en-us/glossary/b\303\251zier_curve/index.html"

...

What's that?! First of all, I actually had this wrapped in a Python script that uses GitPython to analyze the changes from repo.index.diff(None). So I got...

FileNotFoundError: [Errno 2] No such file or directory: '"files/en-us/glossary/b\\303\\251zier_curve/index.html"'

What's that?!

At first, I thought it was something wrong with how I use GitPython and thought I could force some sort of conversion to UTF-8 with Python. That, and strip the quotation characters with something like path = path[1:-1] if path.startswith('"') else path.

After much googling and experimentation, what totally solved all my problems was to run:

▶ git config --global core.quotePath false

Now you get...:

▶ git status
...
Changes not staged for commit:
  ...
    modified:   files/en-us/glossary/bézier_curve/index.html

...

And that also means it works perfectly fine with any GitPython code that does something with the repo.index.diff(None) or repo.index.diff(repo.head.commit).
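
For reference, a minimal sketch of the kind of GitPython loop I had (assuming you run it from the root of a git checkout):


from git import Repo

repo = Repo(".")
# With core.quotePath set to false, file paths containing non-ascii
# characters come back as plain, unquoted strings.
for change in repo.index.diff(None):
    print(change.a_path)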

Also, I use the get-diff-action GitHub Action, which would fail to spot files that contained umlauts, but now I run this:


    steps:
       - uses: actions/checkout@v2
+
+      - name: Config git core.quotePath
+        run: git config --global core.quotePath false
+
       - uses: technote-space/get-diff-action@v4.0.6
         id: git_diff_content
         with: