Filtered by Linux


How I added brotli_static to nginx 1.17 in Ubuntu (Eoan Ermine) 19.10

April 9, 2020
0 comments Nginx, Linux

I knew I didn't want to download the sources to nginx to install it on my new Ubuntu 19.10 server because I'll never have the discipline to remember to keep it upgraded. No, I'd rather just run apt update && apt upgrade every now and then.

Why is this so hard?! All I need is the ability to set brotli_static on; in my Nginx config so it'll automatically pick the .br file if it exists on disk.

These instructions totally helped but here they are specifically for my version (all run as root):

git clone --recursive https://github.com/google/ngx_brotli.git

apt install brotli
apt-get build-dep nginx

# Note which version of nginx you have installed
nginx -v
# ...which informs which URL to wget
wget https://nginx.org/download/nginx-1.17.9.tar.gz
aunpack nginx-1.17.9.tar.gz
nginx -V 2>&1 >/dev/null | grep -o " --.*" | grep -oP .+?(?=--add-dynamic-module)| head -1 > nginx-1.17.9/build_args.txt
cd nginx-1.17.9/
./configure --with-compat $(cat build_args.txt) --add-dynamic-module=../ngx_brotli
# Only build the dynamic modules; the apt-installed nginx stays as-is
make modules

cp objs/ngx_http_brotli_filter_module.so  /usr/lib/nginx/modules/
chmod 644 /usr/lib/nginx/modules/ngx_http_brotli_filter_module.so
cp objs/ngx_http_brotli_static_module.so /usr/lib/nginx/modules/
chmod 644 /usr/lib/nginx/modules/ngx_http_brotli_static_module.so

ls -l /etc/nginx/modules

Now I can edit my /etc/nginx/nginx.conf and add (somewhere near the top):

load_module /usr/lib/nginx/modules/ngx_http_brotli_filter_module.so;
load_module /usr/lib/nginx/modules/ngx_http_brotli_static_module.so;

And test that it works:

nginx -t
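
With the modules loaded and brotli_static on; set in a server or location block, a quick way to check that the pre-compressed .br files are actually served is to ask for brotli and look at the response header (a sketch; example.com and the path stand in for your own site):

curl -sI -H 'Accept-Encoding: br' https://example.com/some-page/ | grep -i content-encoding
# Expect "content-encoding: br" if a .br sibling of the file exists on disk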

How to install Node 12 on Ubuntu (Eoan Ermine) 19.10

April 8, 2020
0 comments Node, Linux

I'm setting up a new Ubuntu (Eoan Ermine) 19.10 server and I noticed that apt install nodejs gives you Node v10 which is an LTS (Long Term Support) version that'll last till April 2021. However, I want Node v12 which is the most recent LTS release as of April 2020.

To install it I used these instructions:

curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
sudo apt-get install -y nodejs

That worked great.
When it finished, it spat out this nice little blurb about how to install yarn:

...
Fetched 7454 B in 1s (12.3 kB/s)
Reading package lists... Done

## Run `sudo apt-get install -y nodejs` to install Node.js 12.x and npm
## You may also need development tools to build native addons:
     sudo apt-get install gcc g++ make
## To install the Yarn package manager, run:
     curl -sL https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
     echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
     sudo apt-get update && sudo apt-get install yarn
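
Once that finished, it's worth double-checking that the NodeSource v12 package (and not Ubuntu's v10 one) is what actually got installed. A quick sanity check:

apt-cache policy nodejs   # should list deb.nodesource.com as the source
node --version            # should print v12.x
npm --version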

By the way, I have no idea what nodejs-mozilla is, but running apt show nodejs-mozilla yields:

Package: nodejs-mozilla
Version: 12.16.1-0ubuntu0.19.10.1
Priority: optional
Section: universe/javascript
Origin: Ubuntu
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Installed-Size: 42.0 MB
Depends: libc6 (>= 2.29), libgcc1 (>= 1:3.4), libstdc++6 (>= 9)
Homepage: http://nodejs.org/
Download-Size: 10.4 MB
APT-Sources: http://mirrors.digitalocean.com/ubuntu eoan-updates/universe amd64 Packages
Description: evented I/O for V8 javascript
 Node.js is a platform built on Chrome's JavaScript runtime for easily
 building fast, scalable network applications. Node.js uses an
 event-driven, non-blocking I/O model that makes it lightweight and
 efficient, perfect for data-intensive real-time applications that run
 across distributed devices.
 .
 Node.js is bundled with several useful libraries to handle server
 tasks:
 .
 System, Events, Standard I/O, Modules, Timers, Child Processes, POSIX,
 HTTP, Multipart Parsing, TCP, DNS, Assert, Path, URL, Query Strings.

Installing it doesn't add a node executable and I can't find a home page for it. apt can be weird sometimes.

uwsgi weirdness with --http

September 19, 2019
2 comments Python, Linux

Instead of upgrading everything on my server, I'm just starting from scratch: from Ubuntu 16.04 to Ubuntu 19.04, and I also upgraded everything else in sight. One of those things was uwsgi. I copied over various user config files but for uwsgi things didn't go very well. On the old server I had uwsgi version 2.0.12-debian and on the new one 2.0.18-debian. The uWSGI changelog is pretty hard to read but I sure don't see any mention of this.

You see, on SongSearch I have it so that Nginx talks to Django via a uWSGI socket. But the NodeJS server talks to Django via 127.0.0.1:PORT. So I need my uWSGI config to start both. Here was the old config:

[uwsgi]
plugins = python35
virtualenv = /var/lib/django/songsearch/venv
pythonpath = /var/lib/django/songsearch
user = django
uid = django
master = true
processes = 3
enable-threads = true
touch-reload = /var/lib/django/songsearch/uwsgi-reload.touch
http = 127.0.0.1:9090
module = songsearch.wsgi:application
env = LANG=en_US.utf8
env = LC_ALL=en_US.UTF-8
env = LC_LANG=en_US.UTF-8

(The only difference on the new server was the python37 plugin instead)

I start it and everything looks fine. No errors in the log files. And netstat looks like this:

# netstat -ntpl | grep 9090
tcp        0      0 127.0.0.1:9090          0.0.0.0:*               LISTEN      1855/uwsgi

But every time I tried to curl localhost:9090 I kept getting curl: (52) Empty reply from server. Nothing in the log files! No matter what I tried, I just couldn't talk to it over HTTP. No, I'm not a sysadmin. I'm just a hobbyist trying to stand up my little server with the tools and limited techniques I know, but I was stumped.

The solution

After endless Googling for a resolution and trying all sorts of uwsgi commands directly, I somehow stumbled on the solution.


[uwsgi]
plugins = python35
virtualenv = /var/lib/django/songsearch/venv
pythonpath = /var/lib/django/songsearch
user = django
uid = django
master = true
processes = 3
enable-threads = true
touch-reload = /var/lib/django/songsearch/uwsgi-reload.touch
-http = 127.0.0.1:9090
+http-socket = 127.0.0.1:9090
module = songsearch.wsgi:application
env = LANG=en_US.utf8
env = LC_ALL=en_US.UTF-8
env = LC_LANG=en_US.UTF-8

With this one subtle change, I can now curl localhost:9090 and I still have the /var/run/uwsgi/app/songsearch/socket socket. So, yay!
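
A quick way to sanity-check both endpoints after the change (the socket path is the one from my setup above; yours will differ):

# The HTTP port the Node server talks to
curl -si http://127.0.0.1:9090/ | head -n 1

# The unix socket Nginx talks to should still exist
ls -l /var/run/uwsgi/app/songsearch/socket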

I'm blogging about this in case someone else ever gets stuck in the same nasty surprise as me.

Also, I have to admit, I was fuming with rage from this frustration. It's really inspired me to revive the quest for an alternative to uwsgi because I'm not sure it's that great anymore. There are new alternatives such as gunicorn, gunicorn with Meinheld, bjoern etc.

Experimenting with Nginx worker_processes

February 14, 2019
0 comments Web development, Nginx, macOS, Linux

I have Nginx 1.15.8 installed with Homebrew on my macOS. By default, /usr/local/etc/nginx/nginx.conf is set to...:

worker_processes  1;

But, from the documentation, it says:

"The optimal value depends on many factors including (but not limited to) the number of CPU cores, the number of hard disk drives that store data, and load pattern. When one is in doubt, setting it to the number of available CPU cores would be a good start (the value “auto” will try to autodetect it)." (bold emphasis mine)

What is the ideal number for me? The performance of Nginx on my laptop doesn't really matter. But for my side-projects it's important to have a fast Nginx since it serves static HTML and lots of static assets. However, on my personal servers I have a bunch of other resource-hungry stuff going on that's more likely to need the resources, like Elasticsearch and uwsgi.
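
For what it's worth, the docs' suggested starting point, the number of available CPU cores, is easy to check (nproc on Linux, sysctl on macOS):

nproc              # Linux: number of available cores
sysctl -n hw.ncpu  # macOS: number of cores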

To figure this out, I wrote a benchmark program that requested a small index.html about 10,000 times across 10 concurrent clients with hey.

hey -n 10000 -c 10 http://peterbecom.local/plog/variable_cache_control/awspa

I ran this 10 times for each worker_processes setting, changing the value in the nginx.conf file in between. Here's the output:

1 WORKER PROCESSES
BEST  : 13,607.24 reqs/s

2 WORKER PROCESSES
BEST  : 17,422.76 reqs/s

3 WORKER PROCESSES
BEST  : 18,886.60 reqs/s

4 WORKER PROCESSES
BEST  : 19,417.35 reqs/s

5 WORKER PROCESSES
BEST  : 19,094.18 reqs/s

6 WORKER PROCESSES
BEST  : 19,855.32 reqs/s

7 WORKER PROCESSES
BEST  : 19,824.86 reqs/s

8 WORKER PROCESSES
BEST  : 20,118.25 reqs/s

Or, as a graph:

Graph

Now note, this is done here on my MacBook Pro. Not on my Ubuntu DigitalOcean servers. For now, I just want to get a feeling for how these numbers correlate.

Conclusion

The benchmark isn't good enough. The numbers are pretty stable, but I'm doing this on my laptop with multiple browsers idling and Slack and Spotify running. Clearly the throughput goes up a bit when you allocate more workers, but if anything can be learned from this it's: go beyond 1 as a quick win, and from there do more poking and more exhaustive benchmarks. And don't forget, if you have time to go deeper on this, to look at the combination of worker_connections and worker_processes.

How to encrypt a file with Emacs on macOS (ccrypt)

January 29, 2019
0 comments macOS, Linux

Suppose you have a cleartext file that you want to encrypt with a password. Here's how you do that with ccrypt on macOS. First:


▶ brew install ccrypt

Now, you have the ccrypt program. Let's test it:

▶ cat secrets.txt
Garage pin: 123456
Favorite kid: bart
Wedding ring order no: 98c4de910X

▶ ccrypt secrets.txt
Enter encryption key: ▉▉▉▉▉▉▉▉▉▉▉
Enter encryption key: (repeat) ▉▉▉▉▉▉▉▉▉▉▉

# Note that the original 'secrets.txt' is replaced
# with the '.cpt' version.
▶ ls | grep secrets
secrets.txt.cpt

▶ less secrets.txt.cpt
"secrets.txt.cpt" may be a binary file.  See it anyway?

There. Now you can back up that file on Dropbox or whatever and not have to worry about anybody being able to open it without your password. To read it again:


▶ ccrypt --decrypt --cat secrets.txt.cpt
Enter decryption key: ▉▉▉▉▉▉▉▉▉▉▉
Garage pin: 123456
Favorite kid: bart
Wedding ring order no: 98c4de910X

▶ ls | grep secrets
secrets.txt.cpt

Or, to edit it you can do these steps:


▶ ccrypt --decrypt secrets.txt.cpt
Enter decryption key: ▉▉▉▉▉▉▉▉▉▉▉


▶ vi secrets.txt

▶ ccrypt secrets.txt
Enter encryption key:
Enter encryption key: (repeat)

Clunky that you have to extract the file and remember to encrypt it back again. That's where you can use emacs. Assuming you have emacs already installed and a ~/.emacs file, add these lines to it:


(setq auto-mode-alist
 (append '(("\\.cpt$" . sensitive-mode))
               auto-mode-alist))
(add-hook 'sensitive-mode (lambda () (auto-save-mode nil)))
(setq load-path (cons "/usr/local/share/emacs/site-lisp/ccrypt" load-path))
(require 'ps-ccrypt "ps-ccrypt.el")

By the way, how did I know that the load path should be /usr/local/share/emacs/site-lisp/ccrypt? I looked at the output from brew:


▶ brew info ccrypt
ccrypt: stable 1.11 (bottled)
Encrypt and decrypt files and streams
...
==> Caveats
Emacs Lisp files have been installed to:
  /usr/local/share/emacs/site-lisp/ccrypt
...

Anyway, now I can use emacs to open the secrets.txt.cpt file and it will automatically handle the password stuff:

About to open

Opening with password

Opened

This is really convenient. Now you can open an encrypted file, type in your password, and it will take care of encrypting it for you when you're done (saving the file).

Be warned! I'm not an expert at either emacs or encryption, so be careful, and if you get nervous, take precautions and set aside more time to study this more deeply.

elapsed function in bash to print how long things take

December 12, 2018
0 comments macOS, Linux

I needed this for a project and it has served me pretty well. Let's jump right into it:


# This is elapsed.sh

SECONDS=0

function elapsed()
{
  local T=$SECONDS
  local D=$((T/60/60/24))
  local H=$((T/60/60%24))
  local M=$((T/60%60))
  local S=$((T%60))
  (( $D > 0 )) && printf '%d days ' $D
  (( $H > 0 )) && printf '%d hours ' $H
  (( $M > 0 )) && printf '%d minutes ' $M
  (( $D > 0 || $H > 0 || $M > 0 )) && printf 'and '
  printf '%d seconds\n' $S
}

And here's how you use it:


# Assume elapsed.sh to be in the current working directory
source elapsed.sh

echo "Doing some stuff..."
# Imagine it does something slow that
# takes about 3 seconds to complete.
sleep 3
elapsed

echo "Some quick stuff..."
sleep 1
elapsed

echo "Doing some slow stuff..."
sleep 61
elapsed

The output of running that is:

Doing some stuff...
3 seconds
Some quick stuff...
4 seconds
Doing some slow stuff...
1 minutes and 5 seconds

Basically, if you have a bash script that does a bunch of slow things, putting a line with elapsed after some blocks of code will print out how long the script has been running.

It's not beautiful but it works.
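
One small variation: if you want per-block timing instead of the total running time, reset SECONDS before each block (sleep standing in for the real work here):

SECONDS=0
sleep 3      # stand-in for the slow part
elapsed      # prints "3 seconds" for just this block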

hashin 0.14.0 with --update-all and a bunch of other features

November 13, 2018
0 comments Python, Linux

If you don't know what it is, hashin is a Python command-line tool for updating your requirements file's packages and their hashes for use with pip install. It takes the pain out of figuring out what hashes each package on PyPI has. It also takes the pain out of figuring out what version you can upgrade to.

In the 0.14.0 release (changelog) there are a bunch of new features. The most exciting one is --update-all. Let's go through some of the new features:

Update all (--update-all)

Suppose you want to bravely upgrade all the pinned packages to the latest and greatest. Before version 0.14.0 you'd have to manually open the requirements file and list every single package on the command line:


$ less requirements.txt
$ hashin Django requests Flask cryptography black nltk numpy

With --update-all it's the same thing except it does that reading and copy-n-paste for you:


$ hashin --update-all

Particularly nifty is to combine this with --dry-run if you get nervous about that many changes.
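
For example, to see the full diff of what --update-all would do without writing anything:

$ hashin --update-all --dry-run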

Interactive update all (--interactive)

This new flag only makes sense when used together with --update-all. Used together, it basically reads all packages in the requirements file, and for each one that has a new version it asks you whether to update it or skip it.

It looks like this:


$ hashin --update-all --interactive
PACKAGE                        YOUR VERSION    NEW VERSION
Django                         2.1.2           2.1.3           ✓
requests                       2.20.0          2.20.1          ✘
numpy                          1.15.2          1.15.4          ?
Update? [Y/n/a/q/?]:

You can also use the aliases hashin -u -i to do the same thing.

Support for "extras"

If you want to have requests[security] or markus[datadog] in your requirements file, hashin used to not support that. This now works:


$ hashin "requests[security]"

Before, it would look for a package called verbatim requests[security] on PyPI which obviously doesn't exist. Now, it parses that syntax, makes a lookup for requests and when it's done it puts the extra syntax back into the requirements file.

Thanks Dustin Ingram for pushing for this one!

Atomic writes

Prior to this version, if you typed hashin requests flask numpy nltkay it would process one package at a time and effectively open and edit the requirements file as many times as there are packages mentioned. The crux of that is that if you, for example, have a typo (e.g. nltkay instead of nltk) it would crash there and not roll back any of the earlier writes. It's not a huge harm but it certainly is counterintuitive.

Another place where this matters is with --dry-run. If you specified something like hashin --dry-run requests flask numpy you would get one diff per package and thus repeat the diff header 3 (excessive) times.

The other reason atomic writes are important is hashin --update-all --interactive: if it asks you whether to update package1, package2, package3 and you then decide "Nah. I don't want any of this. I quit!", it can bail out without having touched the requirements file at all.

Better not-found errors

This was never a problem if you used Python 2.7, but for Python 3.x, if you typoed a package name you'd get a Python exception about the HTTP call and it wasn't obvious that the mistake was in your input and not the network. Now it traps any HTTP errors, and a 404 is handled gracefully.

(Internal) Black everything and pytest everything

All source code is now formatted with Black which, albeit imperfect, kills any boring manual review of code style nits. And, it uses therapist to wrap the black checks and fixes.

And all unit tests are now written for pytest. pytest was already the tool used in TravisCI but now all of those self.assertEqual(foo, bar)s have been replaced with assert foo == bar.

The best grep tool in the world; ripgrep

June 19, 2018
3 comments Linux, Web development, macOS

tl;dr; ripgrep (aka. rg) is the best tool to grep today.

ripgrep is a tool for searching files. Its killer feature is that it's fast. Like, really really fast. Faster than sift, git grep, ack, regular grep etc.

If you don't believe me, either read this detailed blog post from its author or just jump straight to the conclusion:

  • For both searching single files and huge directories of files, no other tool obviously stands above ripgrep in either performance or correctness.

  • ripgrep is the only tool with proper Unicode support that doesn’t make you pay dearly for it.

  • Tools that search many files at once are generally slower if they use memory maps, not faster.

Benchmark

I used to use git grep whenever I was inside a git repo and sift for everything else. That alone was a huge step up from regular grep. Granted, almost all my git repos are small enough that git grep is often faster than I can perceive. But with ripgrep I can just add --no-ignore-vcs and it searches the files mentioned in .gitignore too. That's useful when you want to search your own source as well as the files in node_modules.

The installation instructions are easy. I installed it with brew install ripgrep and the best way to learn how to use it is rg --help. Remember that it has a lot of cool features that are well worth learning. It's written in Rust and so far I haven't had a single crash, ever. The ability to search by file type takes some getting used to (tip! use: rg --type-list), and remember that you can pipe rg output to another rg. For example, to find all lines that contain both query and string you can use rg query | rg string.
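
To collect those tips in one place (query and string are just placeholder search terms; -t is the flag that actually limits by file type):

rg --type-list                # see which file types rg knows about
rg -t py 'query'              # limit the search to Python files
rg --no-ignore-vcs 'query'    # also search files ignored via .gitignore (e.g. node_modules)
rg 'query' | rg 'string'      # lines that contain both "query" and "string"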

How to unset aliases set by Oh My Zsh

June 14, 2018
5 comments Linux, macOS

I use Oh My Zsh and I highly recommend it. However, it sets some aliases that I don't want. In particular, there's a plugin called git.plugin.zsh (located in ~/.oh-my-zsh/plugins/git/git.plugin.zsh) that interferes with a global binary I have in $PATH. So when I start a shell the executable gg becomes...:

▶ which gg
gg: aliased to git gui citool

That overrides /usr/local/bin/gg, which is the one I want to execute when I type gg. To unset that I can run...:

▶ unalias gg

▶ which gg
/usr/local/bin/gg

To override it "permanently", I added, to the end of ~/.zshrc:


# This unsets ~/.oh-my-zsh/plugins/git/git.plugin.zsh
# So my /usr/local/bin/gg works instead
unalias gg

Now whenever I start a new terminal, it defaults to the gg in /usr/local/bin/gg instead.
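
If you're curious what else the git plugin aliases before deciding what to unalias, something like this works (a rough sketch):

grep -E '^alias ' ~/.oh-my-zsh/plugins/git/git.plugin.zsh | less
# or just the ones active in the current shell:
alias | grep 'git gui citool'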

How to NOT start two servers on the same port

June 11, 2018
2 comments Linux, Web development

First of all, you can't start two servers on the same port. Ultimately it will fail. However, you might not want to find that out late. For example, if you do this:


# In one terminal
$ cd elasticsearch-6.1.0
$ ./bin/elasticsearch
...
$ curl localhost:9200
...
"version" : {
    "number" : "6.1.0",
...

# In *another* terminal
$ cd elasticsearch-6.2.4
$ ./bin/elasticsearch
...
$ curl localhost:9200
...
"version" : {
    "number" : "6.1.0",
...

In other words, what happened to elasticsearch-6.2.4/bin/elasticsearch?? It actually started on port :9201. But that's rather scary because, as you jump between projects in different tabs, you might not notice that you already have Elasticsearch running with docker-compose somewhere.

To remedy this I use this curl one-liner:


$ curl -s localhost:9200 > /dev/null && echo "Already running!" && exit || ./bin/elasticsearch

Now if you try to start a server on a used port it will exit early.

To wrap this up in a script, take this:


#!/bin/bash

set -eo pipefail

hostandport=$1
shift
curl -s "$hostandport" >/dev/null && \
  echo "Already running on $hostandport" && \
  exit 1 || exec "$@"

...and make it an executable called unlessalready.sh and now you can do this:


$ unlessalready.sh localhost:9200 ./bin/elasticsearch
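
It works with anything that binds a port. For example (a made-up case just to illustrate, using Python's built-in http.server):

$ unlessalready.sh localhost:8000 python3 -m http.server 8000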