
AJAX or not

December 22, 2014
1 comment Web development, AngularJS, JavaScript

From the It-Depends-on-What-You're-Building department.

As a web developer you have a job:

  1. Display a certain amount of database data on the screen
  2. Do it as fast as possible

The first point is these days easily taken care of with the likes of Django or Rails, which make it über easy to write queries that you then use in templates to generate the HTML, and voilà, you have a web page.

The second point is taken care of with a myriad of techniques. It's almost a paradox. The fastest way to render something on the screen is to generate everything on the server and send it whole. That way the browser can very quickly (and boosted by the GPU) render something on the screen. But if you have a lot of data that needs to be displayed it's often better to send just a little bit of HTML and then let some JavaScript kick in and take care of extracting the rest of the information with AJAX.

Here I have prepared three different versions of ways to display a bunch of information on the screen:

https://www.peterbe.com/ajaxornot/

Visual comparison on WebPagetest
What you should note and take away from this little experimental playground:

  1. All server-side work is done in Django but it's served straight out of memcache so it should be fast server-side.

  2. The content is NOT important. It's just a list of blog posts and their categories and keywords.

  3. To make it somewhat realistic, each version needs to 1) display a JPG and 2) have a JavaScript onclick event that throws a confirm() dialog box.

  4. The AngularJS version loads significantly slower, but it's not because AngularJS is slow; it's because it's able to do so much more later. Loading a JavaScript framework is like an investment: a big cost upfront and a small cost later when you need more magic to happen without a complete server refresh.

  5. Views 1, 2 and 3 are all imperfect versions, but they illustrate the three major approaches to solving the problem stated at the top of this blog post. The other views are attempts at optimization.

  6. Clearly the "visually fastest" version is optimization version 5, which is a fork of version 2: it loads, on the server-side, everything that is above the fold and then takes care of the content below the fold with AJAX (see the sketch after this list).
    See this visual comparison

  7. Optimization version 4 was a silly optimization. It depends on the fact that JSON is more "compact" than HTML. Once you gzip the content, the difference in size no longer matters. However, it's an interesting technique because it means you can do all the business logic rendering in one language without having to depend on AJAX.

  8. Open the various versions in your browser and try to "feel" how the pages load. Ask your inner guttural heart which version you prefer: do you prefer a completely blank screen and a browser loading spinner, or do you prefer to see some skeleton structure first whilst waiting for the bulk content to come in?

  9. See this as a basis for thought and demonstration. Remember the very first sentence in this blog post.
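
To give a feel for the version 5 pattern, here's a minimal sketch of its client-side half (not the actual playground code; the URL and the element names are made up):


$(function() {
  // The server has already rendered everything above the fold.
  // Once the page has loaded, fetch the rest and append it.
  $.getJSON('/ajaxornot/rest-of-the-posts', function(response) {
    var $container = $('#below-the-fold');
    response.posts.forEach(function(post) {
      $('<div class="post">')
        .append($('<h3>').text(post.title))
        .append($('<p>').text(post.summary))
        .appendTo($container);
    });
  });
});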

One-way data bindings in AngularJS 1.3

December 11, 2014
1 comment AngularJS, JavaScript

You might have heard that AngularJS 1.3 has "one-time bindings", meaning you can print the value of a scope variable with {{ ::somevar }}. This is really good for performance because once rendered it doesn't add to the list of things the angular app needs to keep worrying about. I.e. it's one less thing to watch.

But what's a good use case of this? This is a good example.

Because ng-if="true" causes the DOM element to be re-created, it goes back to the scope variable and re-evaluates it.
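
For illustration, here's a minimal sketch of the syntax itself (user.name is a made-up scope variable):


<!-- watched on every digest cycle -->
<p>{{ user.name }}</p>

<!-- evaluated once, then dropped from the watch list -->
<p>{{ ::user.name }}</p>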

To then() or to success() in AngularJS

November 27, 2014
19 comments AngularJS, JavaScript

By writing this I'm taking a risk of looking like an idiot who has failed to read the docs. So please be gentle.

AngularJS uses a promise module called $q. It originates from this beast of a project.

You use it like this for example:


angular.module('myapp')
.controller('MainCtrl', function($scope, $q) {
  $scope.name = 'Hello ';
  var wait = function() {
    var deferred = $q.defer();
    setTimeout(function() {
      // Reject 3 out of 10 times to simulate 
      // some business logic.
      if (Math.random() > 0.7) deferred.reject('hell');
      else deferred.resolve('world');
    }, 1000);
    return deferred.promise;
  };

  wait()
  .then(function(rest) {
    $scope.name += rest;
  })
  .catch(function(fallback) {
    $scope.name += fallback.toUpperCase() + '!!';
  });
});

Basically, you construct a deferred object and return its promise. Then you can expect either the .then or the .catch callback to be called, depending on whether all went well or not.

There are other ways you can use it too but let's stick to the basics to drive home this point to come.

Then there's the $http module. It's where you do all your AJAX stuff and it's really powerful. However, it uses an abstraction of $q and, being an abstraction, it renames its callbacks. Instead of .then and .catch it's .success and .error, and the arguments you get are different. Both expose a catch-all function called .finally. You can, if you want to, bypass this abstraction and do what the abstraction does yourself. So instead of:


$http.get('https://api.github.com/users/peterbe/gists')
.success(function(data) {
  $scope.gists = data;
})
.error(function(data, status) {
  console.error('Repos error', status, data);
})
.finally(function() {
  console.log("finally finished repos");
});

...you can do this yourself...:


$http.get('https://api.github.com/users/peterbe/gists')
.then(function(response) {
  $scope.gists = response.data;
})
.catch(function(response) {
  console.error('Gists error', response.status, response.data);
})
.finally(function() {
  console.log("finally finished gists");
});

It's like it's built specifically for doing HTTP stuff. The $q module doesn't know that the response body, the HTTP status code and the HTTP headers are important.

However, there's a big caveat. You might not always know you're doing AJAX stuff. You might be using a service from somewhere and you don't care how it gets its data. You just want it to deliver some data. For example, suppose you have an AJAX request cached so that only the first time does it need to do an HTTP GET; all consecutive times you can use the stuff already in memory. Something like this:


angular.module('myapp')
.controller('MainCtrl', function($scope, $q, $http, $timeout) {

  $scope.name = 'Hello ';
  var name = null;
  var getName = function() {
    var deferred = $q.defer();
    // Serve from the in-memory cache if we already have it.
    if (name !== null) {
      deferred.resolve(name);
      return deferred.promise;
    }
    $http.get('https://api.github.com/users/peterbe')
    .success(function(data) {
      name = data.name;  // cache it for consecutive calls
      deferred.resolve(name);
    }).error(deferred.reject);
    return deferred.promise;
  };

  // Even though we're calling this 3 different times
  // you'll notice it only starts one AJAX request.
  $timeout(function() {
    getName().then(function(name) {
      $scope.name = "Hello " + name;
    });    
  }, 1000);

  $timeout(function() {
    getName().then(function(name) {
      $scope.name = "Hello " + name;
    });    
  }, 2000);

  $timeout(function() {
    getName().then(function(name) {
      $scope.name = "Hello " + name;
    });    
  }, 3000);
});

And with all the other promise frameworks lying around, like jQuery's, you will sooner or later forget whether it's success() or then() or done(), and your goldfish memory (like mine) will cause confusion and bugs.

So is there a way to make $http.<somemethod> return a $q like promise but with the benefit of the abstractions that the $http layer adds?

Here's one such possible solution maybe:


var app = angular.module('myapp');

app.factory('httpq', function($http, $q) {
  return {
    get: function() {
      var deferred = $q.defer();
      $http.get.apply(null, arguments)
      .success(deferred.resolve)
      .error(deferred.reject);  // reject, not resolve, on error
      return deferred.promise;
    }
  };
});

app.controller('MainCtrl', function($scope, httpq) {

  httpq.get('https://api.github.com/users/peterbe/gists')
  .then(function(data) {
    $scope.gists = data;
  })
  .catch(function(data) {
    console.error('Gists error', data);
  })
  .finally(function() {
    console.log("finally finished gists");
  });
});

That way you get the benefit of one consistent way for all things that get you data, some way or another, and you get the nice AJAXy signatures you like.

This is just a prototype and clearly it's not generic enough to work with all of the shortcut functions in $http like .post(), .put() etc. That can maybe be solved with a Proxy object or some other hack I haven't had time to think of yet (see the sketch below for one simple approach).
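
For example, here's a hedged sketch (not from the original post) that generates the same kind of wrapper for several of the $http shortcut methods simply by looping over their names:


var app = angular.module('myapp');

app.factory('httpq', function($http, $q) {
  var service = {};
  // Generate one wrapper per shortcut method.
  ['get', 'delete', 'post', 'put'].forEach(function(method) {
    service[method] = function() {
      var deferred = $q.defer();
      $http[method].apply(null, arguments)
      .success(deferred.resolve)
      .error(deferred.reject);
      return deferred.promise;
    };
  });
  return service;
});

It doesn't cover every signature difference between the methods, but it avoids any Proxy magic.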

So, what do you think? Am I splitting hairs or is this something attractive?

A "perma search" in AngularJS

November 18, 2014
0 comments AngularJS, JavaScript

A common thing in many (AngularJS) apps is to have an ng-model input whose content is used as a filter on an ng-repeat somewhere within the page. Something like this:


<input ng-model="search">
<div ng-repeat="item in items | filter:search">...

Well, what if you want the search you make to automatically become part of the URL, so that if you bookmark the search or copy the URL to someone else, the search is still there? That would be really practical. Granted, you don't always want this, but that's something you can decide.

AngularJS 1.2 (I think) introduced the ability to set reloadOnSearch: false on a route provider, which means that you can do things like $location.hash('something') without triggering the route provider to re-map the URL and re-start the relevant controller.

So here's a good example of (ab)using that to do a search filter which automatically updates the URL.

Check out the demo: https://www.peterbe.com/permasearch/index.html

This works in HTML5 mode too if you're wondering.

Suppose you use many more things in your filter function than just a free-text ng-model. Like this:


<input type="text" ng-model="filters.search">
<select ng-model="filters.year">
<option value="">All</option>
<option value="2014">2014</option>
<option value="2013">2013</option>
</select>

You might have some checkboxes and stuff too. All you need to do then is to encode that information in the hash. Something like this might be a good start:


$scope.filters = {};
$scope.$watchCollection('filters', function(value) {
    $location.hash($.param(value)); // a jQuery function
});

And something like this to "unparse" the params.
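
The demo has its own code for this; here's a hedged sketch of what the reverse step could look like on controller start-up (note that $.param has no built-in inverse, so this hand-rolls one):


var hash = $location.hash();
if (hash) {
  hash.split('&').forEach(function(pair) {
    var parts = pair.split('=');
    // $.param encodes spaces as '+', so undo that too.
    $scope.filters[decodeURIComponent(parts[0])] =
      decodeURIComponent((parts[1] || '').replace(/\+/g, ' '));
  });
}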

localForage vs. XHR

October 22, 2014
9 comments JavaScript

tl;dr; Fetching from IndexedDB is about 5-15 times faster than fetching from AJAX.

localForage is a browser wrapper that makes it easy to work with whatever local storage the browser offers. Different browsers have different implementations. By default, when you use localForage in Firefox it uses IndexedDB, which is asynchronous, meaning your script doesn't get blocked whilst waiting for data to be retrieved.

A good pattern for a "fat client" (lots of JavaScript, server primarily speaks JSON) is to download some data by AJAX as JSON and then store it in the browser. Next time you load the page, you first read from the local storage in the browser whilst you wait for fresh new JSON from the server. That way you can present data on the screen sooner. (This is how Buggy works; blogged about it here)

Another similar pattern is that you load everything by AJAX from the server, present it and store it in the local storage. Then periodically (or just on load) you send the most recent timestamp from the data you've received, and the server gives you back everything new and everything that has changed since that timestamp. The advantage of this is that the payload stays small, but the server has to make a custom response for each client, whereas one big fat blob of JSON can be cached better. However, oftentimes the data depends on your credentials/cookie anyway, so most probably you can't do much caching.

Anyway, whichever pattern you attempt, I thought it'd be interesting to get a feel for how much faster it is to retrieve from the browser's storage compared to doing a plain old AJAX GET request. After all, browsers have seriously optimized AJAX requests these days, so basically the only thing standing in your way is network latency.

So I wrote a little comparison script that tests this. It's here: https://www.peterbe.com/localforage-vs-xhr/index.html

It retrieves a 225Kb JSON blob from the server and measures how long it takes to become an object. It does the same with localforage.getItem, then runs this 10 times and compares the times. It's obviously no surprise that the local storage retrieval is faster; what's interesting is the size of the difference.
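
The gist of the measurement is something like this (a rough sketch, not the actual benchmark code; the 'blob' key and the URL are made up):


var t0 = performance.now();
localforage.getItem('blob', function(err, data) {
  console.log('localForage took', performance.now() - t0, 'ms');
});

var t1 = performance.now();
$.getJSON('/localforage-vs-xhr/blob.json', function(data) {
  console.log('XHR took', performance.now() - t1, 'ms');
});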

What do you think? I'm sure both sides can be optimized, but at this level the scenarios feel quite realistic.

An AngularJS directive with itself as the attribute

September 3, 2014
8 comments JavaScript, AngularJS

Because this took me quite a while to figure out, I thought I'd share in case somebody else is falling into the same pit of confusion.

When you write an attribute directive in AngularJS you might want to have it fed by an attribute value.
For example, something like this:


<div my-attribute="somevalue"></div>

How then do you create a new scope that takes that in? It's not obvious. Anyway, here's how you do it:


app.directive('myAttribute', function() {
    return {
        restrict: 'A',
        scope: {
            myAttribute: '='
        },
        template: '<div style="font-weight:bold">{{ myAttribute | number:2 }}</div>'
    };
});

The trick to notice is that the "self attribute" becomes the name of the attribute in camel case.
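
So with the directive above, usage could look like this (pi is just a made-up scope variable):


<div ng-init="pi = 3.14159" my-attribute="pi"></div>
<!-- renders as: <div style="font-weight:bold">3.14</div> -->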

Thanks @mythmon for helping me figure this out.

Aggressively prefetching everything you might click

August 20, 2014
13 comments This site, Web development, JavaScript

I just rolled out a change here on my personal blog which I hope will make my few visitors happy.

**Basically: when you hover over a (local) link long enough, it prefetches it (with AJAX) so that if you do click, it's hopefully already cached in your browser.**

If you hover over a link and almost instantly hover out it cancels the prefetching. The assumption here is that if you deliberately put your mouse cursor over a link and proceed to click on it you want to go there. Because your hand is relatively slow I'm using the opportunity to prefetch it even before you have clicked. Some hands are quicker than others so it's not going to help for the really quick clickers.

What I also had to do was set a Cache-Control header of 1 hour on every page so that the browser can learn to cache it.

The effect is that when you do finally click the link, by the time your browser loads it and changes the rendered output, it'll hopefully be able to render it from its cache and thus it becomes visually ready faster.

Let's try to demonstrate this with this horrible animated gif:
(or download the screencast.mov file)

Screencast
1. Hover over a link (in this case the "Now I have a Gmail account" from 2004)
2. Notice how the Network panel preloads it
3. Click it after a slight human delay
4. Notice that when the clicked page is loaded, it's served from the browser cache
5. Profit!

So the code that does this is quite simple:


$(function() {
  var prefetched = [];
  var prefetch_timer = null;
  $('div.navbar, div.content').on('mouseover', 'a', function(e) {
    var value = e.target.attributes.href.value;
    // Only prefetch local links we haven't already prefetched.
    if (value.indexOf('/') === 0) {
      if (prefetched.indexOf(value) === -1) {
        if (prefetch_timer) {
          clearTimeout(prefetch_timer);
        }
        // Wait 200ms before firing so a quick mouse-through
        // doesn't trigger a needless request.
        prefetch_timer = setTimeout(function() {
          $.get(value, function() {
            // necessary for $.ajax to start the request :(
          });
          prefetched.push(value);
        }, 200);
      }
    }
  }).on('mouseout', 'a', function(e) {
    // Hovered out too quickly; cancel the pending prefetch.
    if (prefetch_timer) {
      clearTimeout(prefetch_timer);
    }
  });
});

Also, available on GitHub.

I'm excited about this change because of a couple of reasons:

  1. On mobile, where you might be on a non-wifi data connection, you don't want this. But there the onmouseover mouse event never triggers, so people on such devices don't "suffer" from this optimization.
  2. It only downloads the HTML, which is quite light compared to static assets such as pictures, but it warms up the server-side cache if need be.
  3. It's much more targeted than a general prefetch meta header.
  4. Most likely content will appear rendered to your eyes faster.

Gzip rules the world of optimization, often

August 9, 2014
4 comments Python, JavaScript

So I have a massive chunk of JSON that a Django view is sending to a piece of Angular that displays it nicely on the page. It's big. 674Kb actually. And it's likely going to be bigger in the near future. It's basically a list of dicts. It looks something like this:


>>> pprint(d['events'][0])
{u'archive_time': None,
 u'archive_url': u'/manage/events/archive/1113/',
 u'channels': [u'Main'],
 u'duplicate_url': u'/manage/events/duplicate/1113/',
 u'id': 1113,
 u'is_upcoming': True,
 u'location': u'Cyberspace - Pacific Time',
 u'modified': u'2014-08-06T22:04:11.727733+00:00',
 u'privacy': u'public',
 u'privacy_display': u'Public',
 u'slug': u'bugzilla-development-meeting-20141115',
 u'start_time': u'15 Nov 2014 02:00PM',
 u'start_time_iso': u'2014-11-15T14:00:00-08:00',
 u'status': u'scheduled',
 u'status_display': u'Scheduled',
 u'thumbnail': {u'height': 32,
                u'url': u'/media/cache/e7/1a/e71a58099a0b4cf1621ef3a9fe5ba121.png',
                u'width': 32},
 u'title': u'Bugzilla Development Meeting'}

So I thought one hackish simplification would be to convert each of these dicts into a list with a known sort order. Something like this:


>>> event = d['events'][0]
>>> pprint([event[k] for k in sorted(event)])
[None,
 u'/manage/events/archive/1113/',
 [u'Main'],
 u'/manage/events/duplicate/1113/',
 1113,
 True,
 u'Cyberspace - Pacific Time',
 u'2014-08-06T22:04:11.727733+00:00',
 u'public',
 u'Public',
 u'bugzilla-development-meeting-20141115',
 u'15 Nov 2014 02:00PM',
 u'2014-11-15T14:00:00-08:00',
 u'scheduled',
 u'Scheduled',
 {u'height': 32,
  u'url': u'/media/cache/e7/1a/e71a58099a0b4cf1621ef3a9fe5ba121.png',
  u'width': 32},
 u'Bugzilla Development Meeting']
 

So I converted my sample events.json file like that:

$ l -h events*
-rw-r--r--  1 peterbe  wheel   674K Aug  8 14:08 events.json
-rw-r--r--  1 peterbe  wheel   423K Aug  8 15:06 events.optimized.json

Excitingly the file is now 250Kb smaller because it no longer contains all those keys.

Now, I'd also send the order of the keys so I could do something like this in the AngularJS code:


 .success(function(response) {
   var events = [];
   response.events.forEach(function(event) {
     var new_event = {};
     response.keys.forEach(function(key, i) {
       new_event[key] = event[i];
     });
     events.push(new_event);
   });
 })
 

Yuck! Nested loops! It was just getting more and more complicated.
Also, if there are keys that are not present in every element, it means I'd have to replace them with None.

At this point I stopped and I could smell the hackish stink of sulfur of the hole I was digging myself into.
Then it occurred to me: gzip is really good at compressing repeated things, and repeated keys are exactly what a document-store type data structure like a list of dicts has plenty of.

So I packed them manually to see what we could get:

$ apack events.json.gz events.json
$ apack events.optimized.json.gz events.optimized.json

And without further ado...

$ l -h events*
-rw-r--r--  1 peterbe  wheel   674K Aug  8 14:08 events.json
-rw-r--r--  1 peterbe  wheel    90K Aug  8 14:20 events.json.gz
-rw-r--r--  1 peterbe  wheel   423K Aug  8 15:06 events.optimized.json
-rw-r--r--  1 peterbe  wheel    81K Aug  8 15:07 events.optimized.json.gz

Basically, all that complicated and slow hoopla for saving 10Kb. No thank you.
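
If you want to convince yourself of this, here's a quick sketch (Node.js, with made-up data) of how little those repeated keys cost once gzip has done its thing:


var zlib = require('zlib');

// 1,000 objects that all repeat the same keys, like the events payload.
var events = [];
for (var i = 0; i < 1000; i++) {
  events.push({archive_url: '/archive/' + i + '/', title: 'Event ' + i});
}
var asDicts = JSON.stringify(events);
var asLists = JSON.stringify(events.map(function(e) {
  return [e.archive_url, e.title];
}));

console.log(asDicts.length, zlib.gzipSync(asDicts).length);
console.log(asLists.length, zlib.gzipSync(asLists).length);
// The raw sizes differ a lot; the gzipped sizes barely do.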

Thank you gzip for existing!

Optimizing MozTrap

June 4, 2014
3 comments Web development, JavaScript, Mozilla

MozTrap is what's called a "test case management system". Basically, software QA people need structure and pattern to their testing. What to test, what versions to test on, what hardware/operating system etc. all form part of a "test suite". That's what MozTrap manages.

So this project was built by Mozilla's automation and tools team. It is currently not an actively developed project; not because it's not needed or used, but because it basically already has all the features we need. A large part of the code base was originally written by a personal friend of mine whom I respect wholeheartedly: Carl Meyer of Django/pip/virtualenv/etc. fame. I'm grateful for the awesome documentation he left behind, amongst many other things.

Together with the team we sat down and listed the biggest pain points as of today. The number one thing is speed: pages load too slowly. Normally when web developers worry about web performance it's a matter of shaving milliseconds off a page where a client's perception equals lost or gained profits. Here it's not a problem of milliseconds but a problem of seconds. After some quick poking around on the production site and looking at some code, the conclusion is simple: the site is so darn slow because the HTML sent from the server is way too MASSIVE. And baked into that is a mixture of the poor web server having to produce a massive HTML blob and that blob having to be sent over the wire.

One test run I made said it took 14 seconds to render a certain page.

Creator drop-down

Why is it so slow?

So how did this happen and why is it not Carl's fault? :) It happened because of the underestimated number of options added to the advanced filtering drop-downs. On a local dev version you never notice these things because you set up some options, for example tags, and the drop-down never gets larger than 10-20 options. On production, however, the "Creator" drop-down today has 1,664 different choices.

If you take all those choices and turn them into HTML like this: <option value="1">Adam</option>\n<option value="2">Bram</option>... etc., you get 66Kb of just HTML. However, MozTrap doesn't work like that. Instead it uses pretty drop-downs that don't look like regular HTML drop-downs. See for yourself; go to https://moztrap.mozilla.org/results/runs/ and click the "Advanced Filtering" button.
So, that means that the HTML for each option instead looks like this:


<li class="filter-item">
  <input name="filter-creator" data-name="creator" value="1" id="id-filter-creator-1" class="check" type="checkbox">
  <span class="onoff">
    <label for="id-filter-creator-1" class="onoffswitch">Adam</label>
    <span class="pinswitch"></span>
    <span class="content" title="creator: Adam">Adam</span>
  </span>
</li>

Now you get 620Kb of HTML just for the "Creator" field. Granted, that is the biggest of all the drop-downs, but lots of them are massive.

So, this makes the page weigh a total of about 1.1Mb just for the HTML. Not only is it a lot of work for the (Django) server to generate this but it's also a heck of a lot of data to send across the Internet on every page request.

So what was the solution?

An ideal solution would have been a significant re-write whereby much of the page's content gets fetched by later AJAX calls. I.e. load a skeleton that renders super fast, and then load the rest with AJAX in the background. That AJAX response could potentially be cached in the browser with localStorage or something, so you get something to show very quickly whilst you wait for the AJAX request to complete. But this would have been too big a change, and because of the way "pinned filters" work on these pages you actually need all the options in the drop-downs on immediate load.

So the solution was to replace all the repeated HTML chunks with one JSON string and a piece of JavaScript template rendering. In the Django template, instead of:


{% for field in filters %}
  {% include "lists/_filter_group.html" with advanced=1 prefix="filter" pinable=1 %}
{% endfor %}

We now replace this with:


<script>
var FILTERS = {% filterset_to_json filters with advanced=1 prefix="filter" pinable=1 %}
</script>
<script id="filter_group" type="text/html">
<section class="filter-group {{ field.cls }}" data-name="{{ field.key }}">
  <h5 class="category-title">
    {{ _field_name_lower }}
    {{# field.switchable }}
    ...

That is basically some Mustache code that I use to generate the HTML DOM nodes and insert them into the page after load.
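
The rendering step itself then looks something like this (a hedged sketch, not the actual MozTrap code; it assumes FILTERS is an array of field objects and that a .filter-groups container exists):


// Render each filter group client-side with Mustache and append it.
var template = $('#filter_group').html();
FILTERS.forEach(function(field) {
  $('.filter-groups').append(Mustache.render(template, {field: field}));
});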

In conclusion

So basically nothing changes. None of the Django view code had to change. Visually there's no difference, and the same user data is sent from the server to the client, just packed in a more optimal way.

There are multiple pages where these massive "Advanced Filtering" options exist but on one page I measured the whole page went from weighing 1.1Mb down to 132Kb.

HTML Tree on Hacker News

May 18, 2014
0 comments Web development, JavaScript

On Friday I did a Show HN and got featured on the front page for HTML Tree.

Google Analytics
Amazingly, out of the 3,858 visitors (according to Google Analytics today), 2,034 URLs were submitted and tested on the app. Clearly a lot of people just clicked the example submission, but 1,634 of the submitted URLs were unique. Granted, some people submitted more than one URL, but I think a large majority of people came up with a URL of their own to try. Isn't that amazing! What a turnout for a Friday afternoon hack (with some Sunday night hacking to make it into a decent looking website).

The lesson to learn here is that the Hacker News crowd is excellent for getting engagement. Yes, there is a lot of blather and almost-repetitive submissions, but by and large it's a very engaging community. Suck on that, those of you who make fun of HN!

UPDATE

HTMLTree has been moved to https://htmltree.herokuapp.com/