<?xml version="1.0" encoding="UTF-8"?> 
<rss version="2.0"
        xmlns:content="http://purl.org/rss/1.0/modules/content/"
        xmlns:wfw="http://wellformedweb.org/CommentAPI/"
        xmlns:dc="http://purl.org/dc/elements/1.1/"
        xmlns:atom="http://www.w3.org/2005/Atom"
        xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
        xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
        >
<channel>
  <title>asgaard</title>
  <description></description>
  <link>https://blog.asgaard.co.uk/2014/4</link>
  <lastBuildDate>Tue, 12 May 26 17:28:57 +0000</lastBuildDate>
  <language>en</language>
  <count>2</count>
  <offset>0</offset>
      <item>
    <title>Serialization in web apps/JavaScript</title>
    <link>https://blog.asgaard.co.uk/2014/04/28/serialization-in-web-apps</link>
    <pubDate>Mon, 28 Apr 14 17:48:41 +0000</pubDate>
    <guid>https://blog.asgaard.co.uk/2014/04/28/serialization-in-web-apps</guid>
    <description><![CDATA[
<p>
Serialization is a problem that pops up in any persistent software. In web apps, it announces itself as soon as requirements go just beyond CRUD. You suddenly have all these data models that need to be represented in local memory at runtime and also saved persistently in some form.<h2>JSON</h2>
<p>
Serialization is actually a laborious problem that touches pretty much your entire codebase. 
<p>
You might think &quot;but JavaScript objects are basically JSON, right?&quot; Well, yes, they are. &quot;And JSON is just a serialization format, right?&quot;  Well, yes... but it&#039;s easily possible to overestimate how much it does for you (which is fine, and is not a valid criticism of JSON). 
<p>
The limitations are encountered very early on: JSON won&#039;t help you deserialize a date, and although it will help you serialize <code>var steve = new Person(&#039;Steve&#039;)</code> into <code>{&#039;name&#039;: &#039;Steve&#039;}</code>, it won&#039;t help deserialize that into something that fulfils <code>person.getName() =&gt; &#039;Steve&#039;</code>.<h2>Types</h2>[...]]]></description>
    <content:encoded><![CDATA[
<p>
Serialization is a problem that pops up in any persistent software. In web apps, it announces itself as soon as requirements go just beyond CRUD. You suddenly have all these data models that need to be represented in local memory at runtime and also saved persistently in some form.<h2>JSON</h2>
<p>
Serialization is actually a laborious problem that touches pretty much your entire codebase. 
<p>
You might think &quot;but JavaScript objects are basically JSON, right?&quot; Well, yes, they are. &quot;And JSON is just a serialization format, right?&quot;  Well, yes... but it&#039;s easily possible to overestimate how much it does for you (which is fine, and is not a valid criticism of JSON). 
<p>
The limitations are encountered very early on: JSON won&#039;t help you deserialize a date, and although it will help you serialize <code>var steve = new Person(&#039;Steve&#039;)</code> into <code>{&#039;name&#039;: &#039;Steve&#039;}</code>, it won&#039;t help deserialize that into something that fulfils <code>person.getName() =&gt; &#039;Steve&#039;</code>.<h2>Types</h2>
<p>
The way I&#039;ve handled this in the past is with a central serialization system that wraps things up into JSON and unwraps them back into your application&#039;s data structures. It works by augmenting JSON representations with type data and delegating down to data-structure-specific implementations of serialize and deserialize.
<p>
I&#039;m not talking about the <code>JSON.stringify()</code> call here, which is trivial; I&#039;m talking about representing my model in a way that is JSON-compatible and also includes all the information necessary to deserialize it automatically.
<p>
For example, JSON has no date type, so we have to represent a date as a JSON primitive and do some magic on either side of the serialization: converting the date to a primitive on the way out, and turning it back into a JavaScript date on the way in. A serializer for a date might just convert it into a time string, like:
<p>
<pre>function serializeDate(date) {
    // toISOString() would round-trip more reliably; toString() is used
    // here to match the example output below.
    return date.toString();
}</pre>
<p>
This will be invoked as part of a larger routine, which uses the above function to populate the &#039;value&#039; field, and itself populates the <code>$type</code> field:
<p>
<pre>{
    &quot;$type&quot;: &quot;date&quot;, 
    &quot;value&quot;: &quot;Mon Apr 28 2014 01:00:00 GMT+0100 (BST)&quot;
}</pre>
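<p>
As a rough sketch (the <code>serializerMap</code> structure here is purely illustrative, not an established API), the wrapping routine might look like:
<p>

```javascript
function serializeDate(date) {
    return date.toString();
}

// Illustrative sketch: the larger routine wraps known types in the
// ($type, value) structure by looking up a type-specific serializer.
// The name serializerMap is an invention for this example.
var serializerMap = [
    { type: 'date',
      matches: function(v) { return v instanceof Date; },
      serialize: serializeDate }
];

function serialize(data) {
    // Primitives are already JSON-compatible.
    if (data == null || typeof data !== 'object') { return data; }

    // Known types get wrapped in the ($type, value) structure.
    for (var i = 0; i < serializerMap.length; i++) {
        if (serializerMap[i].matches(data)) {
            return { '$type': serializerMap[i].type,
                     'value': serializerMap[i].serialize(data) };
        }
    }

    // Plain maps get serialized member by member.
    var ret = {};
    for (var key in data) {
        if (data.hasOwnProperty(key)) { ret[key] = serialize(data[key]); }
    }
    return ret;
}

var wrapped = serialize(new Date('2014-04-28T00:00:00Z'));
console.log(wrapped['$type']); // date
```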
<p>
A deserializer for this might look like:
<p>
<pre>function deserializeDate(value, cb) {
    cb(null, new Date(value));
}</pre>
<p>
which again exists as part of a larger routine, one which inspects the <code>$type</code> field and finds the matching deserializer (here <code>deserializeDate</code>) to pass the <code>value</code> field into.
<p>
Deserializers are best made asynchronous by default because sooner or later you&#039;ll want to represent references to objects that are stored remotely, which require an HTTP request to retrieve, and it&#039;s hard to retrofit asynchrony onto a synchronous API.<h2>deserialize()</h2>
<p>
To tie everything together, you have a top level <code>deserialize(data, cb)</code> function. It&#039;s useful if this function is fairly intelligent and handles anything you throw at it. For convenience, I&#039;ve added the ability to recursively deserialize arbitrary JS maps, because later on it&#039;s useful to just throw a block of data at this function and get a block back, instead of having to queue up a lot of calls. I have not implemented the same case for Array in an attempt to keep the code brief, but you should consider doing so.
<p>
In the code below I&#039;ve used <a href='https://github.com/caolan/async'>Async</a> and <a href='http://underscorejs.org/'>Underscore</a>.
<p>
<pre>var typeMap = {
    'date' : deserializeDate
};

function deserialize(data, cb) {

    // JSON primitives are handled easily
    var primitives = {
        'string' : true,
        'boolean': true,
        'number': true
    };
    if (primitives[typeof data] || data == null) {
        // This is a primitive - we can just return it.
        cb(null, data);
        return;
    }

    var typeField = data['$type'];
    var deserializer = typeField &amp;&amp; typeMap[typeField];
    if (typeof deserializer === 'function') { 
        // This is an object conforming to our ($type, value) structure.
        deserializer(data['value'], cb);
        return;
    }

    // Handle arbitrary JS maps by recursively deserializing their contents.
    if (data.constructor.name === ({}).constructor.name) {
        var ret = {};
        async.each(_.keys(data), function(key, cb) {
            deserialize(data[key], function(err, value) {
                if (err)  { console.log('Some error deserializing ', data[key]); }
                else { ret[key] = value; }
                cb();
            });
        }, function(err) {
            cb(null, ret);
        });
        return;
    }

    // Anything else (e.g. an Array, or an unregistered $type) is unsupported.
    cb(new Error('Cannot deserialize ' + JSON.stringify(data)));
}</pre>
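<p>
As a quick usage illustration, here is a stripped-down, self-contained version covering just the primitive and ($type, value) cases (the plain-map recursion and its async dependency are elided):
<p>

```javascript
// Self-contained stand-ins for the pieces defined above.
function deserializeDate(value, cb) {
    cb(null, new Date(value));
}

var typeMap = {
    'date': deserializeDate
};

function deserialize(data, cb) {
    // Typed objects are dispatched to their registered deserializer.
    var deserializer = data && typeMap[data['$type']];
    if (typeof deserializer === 'function') {
        deserializer(data['value'], cb);
        return;
    }
    cb(null, data); // primitives pass straight through
}

var result;
deserialize({ '$type': 'date', 'value': 'Mon Apr 28 2014 01:00:00 GMT+0100' },
            function(err, date) {
    result = date;
});
console.log(result instanceof Date); // true
```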
<p>
I&#039;ve omitted the serialization code, but it&#039;s very much the same idea. You have a top level serializer that generates JSON-compatible objects by delegating to data-type specific serializers.<h2>Deserializing your own data structures</h2>
<p>
That was pretty easy. The harder parts come when you consider your own objects. 
<br>
Let&#039;s say you have a Person class. It&#039;s useful to have a way to merge serialized data into an existing Person (because this allows us to re-purpose our serialization code for handling live-update events), and it&#039;s also useful to have a way to create a new Person. Luckily, the second is just a trivial special case of the first.
<p>
<pre>Person.fromSerialized = function(obj, cb) {
    var p = new Person();
    p.mergeFromSerialized(obj, cb);
}

// Add this to the typeMap, so it becomes visible to deserialize()
typeMap['Person'] = Person.fromSerialized;

Person.prototype.mergeFromSerialized = function(obj, cb) {

    // The correct way is to use deserialize(). This looks recursive, but 
    // the subtlety is that obj is not a typed object - it's just a plain 
    // JS map of properties. Because of this, the deserializer won't 
    // try to delegate and will just deserialize each member

    function take(object, key, defaultValue) {
        if (object.hasOwnProperty(key)) { 
            return object[key]; 
        }
        else { 
            return defaultValue;
        }
    }

    deserialize(obj, _.bind(function(err, data) {
        if (err) { cb(err); return; }
        this.name = take(data, 'name', this.name);
        this.dob = take(data, 'dob', this.dob);
        cb(null, this);
    }, this));
}</pre>
<p>
<code>take()</code> is a helper function which returns the given key from an object unless that key doesn&#039;t exist, in which case it returns a default. It avoids a lot of <code>if (data.hasOwnProperty()) {}</code> blocks and makes the code a bit more legible.<h2>What about references?</h2>
<p>
We still very quickly encounter yet another case: objects need shared references. If a Person object has a friends array, the JSON shouldn&#039;t embed each friend&#039;s full state in that array; it should just store a reference. JSON has no reference support, so you need to encode your own.
<p>
In this case you need a way to refer to the object itself, not its contents.
<p>
The way you handle this is to give persistent objects IDs, serialize them as special <code>&#039;$ref&#039;</code> objects, and expose a method on your server that returns the object with a given ID.
<p>
<pre>{
    &quot;$type&quot;: &quot;Person&quot;,
    &quot;value&quot;: {
         &quot;name&quot;: &quot;Steve&quot;,
         &quot;friends&quot;: [
             { 
                 &quot;$ref&quot; : {
                     &quot;$collection&quot;: &quot;People&quot;,
                     &quot;id&quot;: 3
                 }
             }
         ]
     }
}</pre>
<p>
This presents an interesting point: that you need two different serialized representations for objects that may exist in collections. One returns a normal keyed object representing the object&#039;s state, the other returns a <code>$ref</code> object. I&#039;ve found the latter case to be the generally useful one, and the former to be a special case which the caller should only invoke purposefully. Meaning: <code>serialize(myPerson) =&gt; {$ref: ... }</code>, and <code>myPerson.serialize() =&gt; {name: ..., dob: ..., friends: ... }</code>.
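<p>
The $ref-producing side is nearly trivial. Assuming, purely for illustration, that persistent objects carry <code>id</code> and <code>collectionName</code> properties (those names are not prescribed anywhere above):
<p>

```javascript
// Hedged sketch: assumes persistent objects expose `id` and `collectionName`
// properties - both names are illustrative inventions for this example.
function serializeRef(obj) {
    return {
        '$ref': {
            '$collection': obj.collectionName,
            'id': obj.id
        }
    };
}

var steve = { collectionName: 'People', id: 3, name: 'Steve' };
console.log(JSON.stringify(serializeRef(steve)));
// {"$ref":{"$collection":"People","id":3}}
```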
<p>
The deserializer for a $ref looks something like:
<p>
<pre>var collections = {};
function deserializeRef(data, cb) {
    
    var collectionName = data['$collection'],
        id = data['id'];
    if (!collections[collectionName]) {
        collections[collectionName] = {};
    }
    var collection = collections[collectionName];
    if (collection[id]) {
        // Already fetched and cached - hand back the shared instance.
        cb(null, collection[id]);
    }
    else {
        yourServerApi.getCollectionElement(collectionName, id, 
                                           function(err, response) {
            if (err) { cb(err); return; }
            deserialize(response, function(err, deserialized) {
                if (err) { cb(err); return; }
                collection[id] = deserialized;
                cb(null, collection[id]);
            });
        });
    }
}</pre><h2>Minification concerns</h2>
<p>
It&#039;s still a bit irritating that <code>mergeFromSerialized</code> requires us to manually write out a line for each property.
<p>
It&#039;s tempting to rewrite it to something like this:
<p>
<pre>Person.prototype.mergeFromSerialized = function(obj, cb) {
    var myFields = ['name', 'dob']; 
    deserialize(obj, _.bind(function(err, data) {
        if (err) { cb(err); return; }
        _.each(myFields, function(field) {
           this[field] = take(data, field, this[field]);
        }, this);
        cb(null, this);
    }, this));
}</pre>
<p>
Unfortunately there&#039;s a glaring problem with this code: you&#039;ve just trashed static analysis. If you are using a compiler which aggressively renames class members, your deserialization will fail, because this code writes to <code>this[&#039;dob&#039;]</code> while the rest of your code reads the renamed property, <code>this.a</code>.  
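<p>
To make the failure mode concrete, here is a sketch with the renaming an aggressive compiler would perform done by hand:
<p>

```javascript
// Hand-"minified" Person: imagine the compiler renamed the dotted
// accesses (this.dob -> this.a) but left string literals untouched.
function Person() { this.a = null; }                      // was: this.dob
Person.prototype.getDob = function() { return this.a; };  // was: this.dob

var p = new Person();
var myFields = ['dob']; // strings survive renaming
myFields.forEach(function(field) {
    p[field] = 'Mon Apr 28 2014'; // writes p.dob - a different property!
});
console.log(p.getDob()); // null - the read never sees the write
```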
<p>
Such aggressive minification might not matter much for a smaller project, but if your compiled JS file measures several megabytes, it&#039;s useful to be able to trim the source.
<p>
I am not really sure what the answer to this is, other than auto-generating the long form serialization source code.]]></content:encoded>
  </item>
      <item>
    <title>Angular vs jQuery</title>
    <link>https://blog.asgaard.co.uk/2014/04/03/angular-vs-jquery</link>
    <pubDate>Thu, 03 Apr 14 20:57:08 +0000</pubDate>
    <guid>https://blog.asgaard.co.uk/2014/04/03/angular-vs-jquery</guid>
    <description><![CDATA[
<p>
On HN today there is an inflammatory article called <a href='https://news.ycombinator.com/item?id=7522520'>&#039;The Reason Angular JS will fail&#039;</a>, which tl;drs to &quot;it&#039;s too complicated compared to jQuery&quot;.
<p>
It&#039;s not a well argued opinion, but it is one I&#039;m inclined to agree with.
<p>
The comments all jump on the fact it is badly argued and state that comparing Angular&#039;s complexity to jQuery&#039;s simplicity is apples to oranges because jQuery isn&#039;t suitable for building large apps, whereas Angular is.
<p>
Yeah, well, no. I don&#039;t agree there.
<p>
I&#039;m curious how many of those people have actually written a large JS app, instead of just imagined what it might involve. I&#039;ve been working on what is currently a 70kloc (and growing) JS app for the last 18 months. I have used JS frameworks before, but we don&#039;t use any on this project. I think, in general, frameworks have properties that make certain kinds of projects easier to put together, and none of them is closely related to the size of the project.
<p>
We briefly l[...]]]></description>
    <content:encoded><![CDATA[
<p>
On HN today there is an inflammatory article called <a href='https://news.ycombinator.com/item?id=7522520'>&#039;The Reason Angular JS will fail&#039;</a>, which tl;drs to &quot;it&#039;s too complicated compared to jQuery&quot;.
<p>
It&#039;s not a well argued opinion, but it is one I&#039;m inclined to agree with.
<p>
The comments all jump on the fact it is badly argued and state that comparing Angular&#039;s complexity to jQuery&#039;s simplicity is apples to oranges because jQuery isn&#039;t suitable for building large apps, whereas Angular is.
<p>
Yeah, well, no. I don&#039;t agree there.
<p>
I&#039;m curious how many of those people have actually written a large JS app, instead of just imagined what it might involve. I&#039;ve been working on what is currently a 70kloc (and growing) JS app for the last 18 months. I have used JS frameworks before, but we don&#039;t use any on this project. I think, in general, frameworks have properties that make certain kinds of projects easier to put together, and none of them is closely related to the size of the project.
<p>
We briefly looked at Angular but it seemed like it was causing us problems instead of solving them so it didn&#039;t survive long (being overly complex didn&#039;t help its case).
<p>
The statement that Angular is for big apps and jQuery is for small apps seems like a false dichotomy to me. 
<p>
We use jQuery for manipulating the DOM because without two way data binding, you&#039;d be crazy not to. Data binding is a big attraction of Angular (and others). 
<p>
I like data binding from a theoretical perspective, but in practice, generating and syncing DOM is not really a hard or notable problem. It&#039;s just a relatively minor implementation issue. I don&#039;t think data binding <em>really</em> provides an advantage in terms of time or effort or maintainability - it seems like it should, but you revise your opinion somewhat after spending many hours debugging data binding templates. Processes relying on other people&#039;s magic are hard to debug.
<p>
Data binding is largely a distraction from things that really do take time and energy, which tend not to be problems touched on by frameworks anyway. Things like &quot;how do I make all these subtly inconsistent but nevertheless intuitive requirements sort of co-exist&quot;. Software is a lot about bridging the gap between reality and someone else&#039;s vision of how reality should be. That stuff is hard. That&#039;s the risk to a big project. Keeping your GUI and data models in sync is not.
<p>
But saying that our app is built with jQuery instead of Angular is sort of missing the point. We just use jQuery at the view level and it works very well. But that&#039;s all it does. We don&#039;t use the DOM to store state, which, I think, is the assertion usually levelled at &#039;jQuery apps&#039;. It&#039;s just a tool for generating DOM a bit easier.
<p>
What I am saying is that I am sceptical of Angular being useful for anything more than a CRUD app on steroids, but I am not sceptical at all of jQuery being useful in virtually any HTML/JS project. This is because jQuery (roughly) adopts the philosophy of &quot;do one thing and do it well&quot;, which means it&#039;s a very good fit for all the use cases it&#039;s aiming at. The same isn&#039;t true of Angular.]]></content:encoded>
  </item>
  </channel>
</rss>