<?xml version="1.0" encoding="UTF-8"?> 
<rss version="2.0"
        xmlns:content="http://purl.org/rss/1.0/modules/content/"
        xmlns:wfw="http://wellformedweb.org/CommentAPI/"
        xmlns:dc="http://purl.org/dc/elements/1.1/"
        xmlns:atom="http://www.w3.org/2005/Atom"
        xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
        xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
        >
<channel>
  <title>asgaard</title>
  <description>Javascript</description>
  <link>https://blog.asgaard.co.uk/t/javascript</link>
  <lastBuildDate>Wed, 22 Apr 26 11:55:23 +0000</lastBuildDate>
  <language>en</language>
  <count>19</count>
  <offset>0</offset>
      <item>
    <title>West Midlands trains delay repay script</title>
    <link>https://blog.asgaard.co.uk/2020/01/17/west-midlands-trains-delay-repay-script</link>
    <pubDate>Fri, 17 Jan 20 19:41:13 +0000</pubDate>
    <guid>https://blog.asgaard.co.uk/2020/01/17/west-midlands-trains-delay-repay-script</guid>
    <description><![CDATA[
<p>
Anyone who has been through the Delay Repay system with a season ticket for West Midlands trains knows that the web site is not built with the user&#039;s convenience in mind. The site makes it very difficult to enter things like start and end dates that aren&#039;t close to the current date, and it does not remember details across multiple visits.
<p>
As such I have a simple block of JavaScript I paste into the console when going through the process to greatly speed it up. When you reach the page to enter your season ticket details, open the developer tools console in your browser and paste in the following (amended with your values, of course):
<p>
<pre>var validFrom = '15/1/2020';
var validTo = '14/1/2021';
var startStation = 'BIRMINGHAM MR ST';
var endStation = 'OLTON';
var ticketCost = '66.50';
var swiftCardNumber = '631591011781573301';

$('label[for=ticketduration7]').click();
$('label[for=5e07f3f9-bfcb-11e7-b959-0689534b0694]').click();
$('label[for=typeticket2]').click();
$('label[for=ticketPayType4]</pre>[...]]]></description>
    <content:encoded><![CDATA[
<p>
Anyone who has been through the Delay Repay system with a season ticket for West Midlands trains knows that the web site is not built with the user&#039;s convenience in mind. The site makes it very difficult to enter things like start and end dates that aren&#039;t close to the current date, and it does not remember details across multiple visits.
<p>
As such I have a simple block of JavaScript I paste into the console when going through the process to greatly speed it up. When you reach the page to enter your season ticket details, open the developer tools console in your browser and paste in the following (amended with your values, of course):
<p>
<pre>var validFrom = '15/1/2020';
var validTo = '14/1/2021';
var startStation = 'BIRMINGHAM MR ST';
var endStation = 'OLTON';
var ticketCost = '66.50';
var swiftCardNumber = '631591011781573301';

$('label[for=ticketduration7]').click();
$('label[for=5e07f3f9-bfcb-11e7-b959-0689534b0694]').click();
$('label[for=typeticket2]').click();
$('label[for=ticketPayType4]').click();

document.getElementById('ValidFrom').value = validFrom;
document.getElementById('ValidTo').value = validTo;

document.getElementById('startstation').value = startStation;
document.getElementById('endstation').value = endStation;

document.getElementById('costofticket').value = ticketCost;
document.getElementById('swiftcardNumber').value = swiftCardNumber;

$('label[for=classofticket-1]').click();</pre>]]></content:encoded>
  </item>
      <item>
    <title>npm is terrible</title>
    <link>https://blog.asgaard.co.uk/2017/08/02/npm-is-terrible</link>
    <pubDate>Wed, 02 Aug 17 08:49:46 +0000</pubDate>
    <guid>https://blog.asgaard.co.uk/2017/08/02/npm-is-terrible</guid>
    <description><![CDATA[
<p>
It&#039;s hard for me to believe in the quality of a piece of software that behaves like this:<pre>
$ npm install [package]
  npm <span style='color: #800000'>ERR!</span> Please try running this command again as root/Administrator. 
$ npm install [package]
  (OK)
</pre>
<p>
I&#039;m sure about five years ago npm was a useful and reliable piece of software. Perhaps I remember incorrectly.
<p>
I had another tale of npm woe last week when, after running `npm install`, my node_modules/package-name was a symlink to none other than node_modules/package-name. For some reason, downgrading npm to version 3 (from version 5) made it all work properly.
<p>
I groaned today when I had to install yarn, yet another package manager, to install something else, but the sooner something sane replaces npm the better.
<p>
[...]]]></description>
    <content:encoded><![CDATA[
<p>
It&#039;s hard for me to believe in the quality of a piece of software that behaves like this:<pre>
$ npm install [package]
  npm <span style='color: #800000'>ERR!</span> Please try running this command again as root/Administrator. 
$ npm install [package]
  (OK)
</pre>
<p>
I&#039;m sure about five years ago npm was a useful and reliable piece of software. Perhaps I remember incorrectly.
<p>
I had another tale of npm woe last week when, after running `npm install`, my node_modules/package-name was a symlink to none other than node_modules/package-name. For some reason, downgrading npm to version 3 (from version 5) made it all work properly.
<p>
I groaned today when I had to install yarn, yet another package manager, to install something else, but the sooner something sane replaces npm the better.
<p>
]]></content:encoded>
  </item>
      <item>
    <title>Ionic JavaScript thoughts</title>
    <link>https://blog.asgaard.co.uk/2016/02/21/ionic-javascript-thoughts</link>
    <pubDate>Sun, 21 Feb 16 21:48:58 +0000</pubDate>
    <guid>https://blog.asgaard.co.uk/2016/02/21/ionic-javascript-thoughts</guid>
    <description><![CDATA[
<p>
First, <a rel='external' href='http://ionicframework.com/'>Ionic</a> is a JavaScript framework for building mobile apps. It&#039;s built on Cordova (the free version of Phonegap) and AngularJS and gives you a set of tools for compiling your code into a native app with a webview. It comes with some AngularJS services and directives for emulating mobile functionality as well as handling effects like screen sliding transitions.
<p>
In general I was surprised by just how native the running product feels. You have to do some work to keep it that way, of course, but for a simple forms based app it&#039;s very achievable to end up with something that looks and feels native. Savvy users will spot the difference, but most probably won&#039;t. The CSS graphics performance on a modern iOS device (say iPhone 5+) is very good and the animation effects are smooth. On Android (5) things are a bit choppier, but still acceptable.
<p>
The other major positive is that it&#039;s much more convenient to develop things locally in a browser, and use all the debugging tools that come with it[...]]]></description>
    <content:encoded><![CDATA[
<p>
First, <a rel='external' href='http://ionicframework.com/'>Ionic</a> is a JavaScript framework for building mobile apps. It&#039;s built on Cordova (the free version of Phonegap) and AngularJS and gives you a set of tools for compiling your code into a native app with a webview. It comes with some AngularJS services and directives for emulating mobile functionality as well as handling effects like screen sliding transitions.
<p>
In general I was surprised by just how native the running product feels. You have to do some work to keep it that way, of course, but for a simple forms based app it&#039;s very achievable to end up with something that looks and feels native. Savvy users will spot the difference, but most probably won&#039;t. The CSS graphics performance on a modern iOS device (say iPhone 5+) is very good and the animation effects are smooth. On Android (5) things are a bit choppier, but still acceptable.
<p>
The other major positive is that it&#039;s much more convenient to develop things locally in a browser, and use all the debugging tools that come with it, than to go through the much longer debug cycle of deploying native code and trying to use an unfriendly debugger that frequently just dumps you to a page of assembly code. I definitely find HTML, CSS and JS to be a faster and more pleasant development environment than Objective C and Xcode&#039;s UI designer (I can&#039;t speak for native Android development).
<p>
There are also downsides to Ionic:
<p>
Native functionality is provided through plugins. These are community made and the quality is wildly variable. It is easy to end up digging through the source of these yourself. On iOS, these will occasionally conflict with each other due to bad namespacing by the author. Most plugins will support iOS and Android, but support for other mobile OSes gets patchy quickly.
<p>
Although it&#039;s write-once-run-anywhere, you do end up spending a lot of time fiddling with OS specific CSS bugs. iOS&#039;s HTML renderer is far from perfect, but Android&#039;s is pretty good!
<p>
The build process is a bit annoying, and we&#039;ve found it hard to lock everything into our version control system due to the way Ionic is structured, especially in regard to mixing Linux (which can&#039;t build for iOS) and Mac machines in our project. We&#039;ve also had numerous things break unexpectedly which later turned out to be due to some detail of the build process.
<p>
Ionic&#039;s JavaScript libraries are of variable quality. I recommend simply not using $ionicPopup at all and instead writing your own popup library (or finding another one), because the one built into Ionic has some catastrophic bugs like the fact that displaying two popups at once will render the first unresponsive, which means the user has to restart the app. There&#039;s also a timing issue with rendering a popup during a screen transition which will cause the popup to not have any buttons (so again, the user has to restart the app because they can&#039;t close it). The developers do not appear responsive in fixing these issues. Other services have smaller bugs which are less problematic, but still don&#039;t inspire a huge amount of confidence.
<p>
Overall: pretty good, but not without problems.]]></content:encoded>
  </item>
      <item>
    <title>When should I use a JavaScript compiler, and which one should I use?</title>
    <link>https://blog.asgaard.co.uk/2014/12/20/when-should-i-use-a-javascript-compiler-and-which-one-should-i-use</link>
    <pubDate>Sat, 20 Dec 14 13:40:47 +0000</pubDate>
    <guid>https://blog.asgaard.co.uk/2014/12/20/when-should-i-use-a-javascript-compiler-and-which-one-should-i-use</guid>
    <description><![CDATA[
<p>
There are a few different options for compile-to-JS languages, and here I&#039;m going to outline the main contenders as of late 2014.<h2>Plain JavaScript</h2>
<p>
Let&#039;s start with the basics. A lot of people who do not understand plain JS berate it for many things, but in reality its main practical drawback is that it becomes unwieldy on larger projects (like any dynamic language), and in such cases, having a checking compiler will cut down on a lot of unit testing. Plain JavaScript is <em>fine</em> for small projects, but it becomes problematic as your code grows; the point at which that happens depends on the skill and discipline of your developers.
<p>
Regardless of what you choose, you&#039;re going to be working indirectly with JS and you should learn it to a fluent level.<h2>CoffeeScript</h2>
<p>
<a href='http://coffeescript.org/' target='_blank'>CoffeeScript</a> (CS) was very popular when it was released but popularity has since waned. Its only feature is syntax, which implies that there is something seriously wrong with JavaScript&#039;s syntax (subjective), but fixes it by making it com[...]]]></description>
    <content:encoded><![CDATA[
<p>
There are a few different options for compile-to-JS languages, and here I&#039;m going to outline the main contenders as of late 2014.<h2>Plain JavaScript</h2>
<p>
Let&#039;s start with the basics. A lot of people who do not understand plain JS berate it for many things, but in reality its main practical drawback is that it becomes unwieldy on larger projects (like any dynamic language), and in such cases, having a checking compiler will cut down on a lot of unit testing. Plain JavaScript is <em>fine</em> for small projects, but it becomes problematic as your code grows; the point at which that happens depends on the skill and discipline of your developers.
<p>
Regardless of what you choose, you&#039;re going to be working indirectly with JS and you should learn it to a fluent level.<h2>CoffeeScript</h2>
<p>
<a href='http://coffeescript.org/' target='_blank'>CoffeeScript</a> (CS) was very popular when it was released, but its popularity has since waned. Its only feature is syntax: it implies that there is something seriously wrong with JavaScript&#039;s syntax (subjective), but fixes it by making it completely different (Ruby-ish) instead of identifying specific problems and improving them. It is now quite surprising to come across CoffeeScript code, which suggests that people have generally rejected CS as an improvement (an opinion I agree with). It does not really help with the fundamental problem of code management, because it&#039;s just another dynamic language. Avoid.<h2>Dart</h2>
<p>
(Google&#039;s) <a href='https://www.dartlang.org/' target='_blank'>Dart</a> is a strange one because although Dart code can compile to JavaScript, it&#039;s really designed to be run on a virtual machine, which only Chrome implements. For this reason, it is not worth evaluating further. Avoid it unless you have a particular reason to use it.<h2>Google Closure Compiler</h2>
<p>
<a href='https://developers.google.com/closure/compiler/' target='_blank'>Closure</a> is a checking compiler from Google, and it is interesting in that it works on pure JS code, i.e. it&#039;s not a separate programming language. Code written for Closure needs comment type annotations (which are quite ugly and cumbersome, but easily written if you are disciplined) to make use of the compiler&#039;s features. The compiler does useful things like checking that the methods you are calling really exist, instead of leaving you to find out at runtime.
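<p>
As a small sketch of what this looks like (the function and names here are made up for illustration, not taken from any real project), annotated code is just plain JS with typed comments, and the compiler flags calls that violate the declared types:

```javascript
/**
 * A Closure-style annotated function: the compiler checks callers
 * against these declared types.
 * @param {string} name
 * @return {string}
 */
function greet(name) { return 'Hello, ' + name; }

var message = greet('world');   // type-checks: string in, string out
// greet(42) would be flagged by the compiler as a type mismatch,
// instead of silently producing 'Hello, 42' at runtime.
```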
<p>
The main downside of Closure is performance; there is no incremental compilation, which means if you change one line of one file you have to recompile the whole thing, and that can take several minutes. I should stress that for development this makes little difference - you tend to develop using your raw code, not the compiled code, so invoking the compiler is something you do before committing code, not (always) before trying it out locally. To make full use of Closure, all external libraries need to have type definitions; for popular libraries these are easy to find, and it is not too hard to write them yourself (but it is an extra requirement).
<p>
Google&#039;s investment in AtScript (see below) may have implications for the future of Closure, but the compiler itself is mature and reliable as it is.<h2>TypeScript</h2>
<p>
<a href='http://www.typescriptlang.org/' target='_blank'>TypeScript</a> (TS) has been around for a few years now and only recently seems to be finding its niche, although growth is still slow. TS is a superset of JS and adds some useful syntax, but its real value is in allowing you to write strongly typed and checked code, similarly to Closure, but its standardisation allows IDEs to give you things like code completion. If your average foray into JavaScript is in writing a 50 line jQuery plugin, TypeScript is completely overkill for you. On the other hand, if you are working on a JS project with multiple developers and hundreds of files, TypeScript is <em>very useful</em>, although it is only recently that the tooling has become mature enough to handle this scale.
<p>
The tooling (via Visual Studio) is (now) both good and stable, and the compilation step in development is fast and largely transparent. If you are using an MS environment and have access to Visual Studio, you should strongly consider using TypeScript. MS claims that tooling exists for non-Windows platforms but I cannot speak to its quality (and frankly, I&#039;d be sceptical).
<p>
Note that TS requires type definitions for external code, like Closure.
<p>
TS is a bit of a risk in that growth appears slow and MS might see that as a reason to discontinue it.<h2>AtScript</h2>
<p>
<a href='https://docs.google.com/document/d/11YUzC-1d0V1-Q3V0fQ7KSit97HnZoKVygDxpWzEYW0U/edit' target='_blank'>AtScript</a> is superficially not dissimilar to TypeScript, but at the moment the project is too immature to seriously evaluate how it practically differs, and because of its immaturity it would be unwise to start using AtScript any time soon.<h2>Summary</h2>
<p>
In summary, your choice for &quot;which compiler do I use?&quot; depends on project size, project length, and number of developers. I&#039;ll condense this all down to lines of code to get the point across. In order of preference:<table class='width-100'><thead><th>Project Size</th><th>Approx #lines of code</th><th>Choices</th></thead><tbody><tr><td>Tiny</td><td>1k</td><td>Plain JS</td></tr><tr><td>Small</td><td>10k</td><td>TS, Closure, Plain JS</td></tr><tr><td>Medium to very large</td><td>&gt;10k</td><td>TS, Closure</td></tr></tbody></table>]]></content:encoded>
  </item>
      <item>
    <title>It is too easy to introduce bugs into AngularJS apps by using third party libraries</title>
    <link>https://blog.asgaard.co.uk/2014/10/15/it-is-too-easy-to-introduce-bugs-into-angularjs-apps-by-using-third-party-libraries</link>
    <pubDate>Wed, 15 Oct 14 20:39:53 +0000</pubDate>
    <guid>https://blog.asgaard.co.uk/2014/10/15/it-is-too-easy-to-introduce-bugs-into-angularjs-apps-by-using-third-party-libraries</guid>
    <description><![CDATA[
<p>
I like Angular, in fact I like it a lot more than I ever expected I would. But one of the issues that nobody ever tells you about is how easy it is to introduce very very subtle bugs simply by using third party libraries. The basic problem is that Angular enforces certain constraints upon you which may or may not be respected by third party libraries. 
<p>
A good example, which I&#039;m sure must happen quite frequently, is a library that moves DOM around without telling you. My specific case with this class of bug was a pop-up dialog which I had templated inside an Angular scope in my DOM, which my dialog library (provided by KendoUI) then moved out to a different area of the DOM.
<p>
It starts off thus:
<p>
<pre>html
    body
        div [angular scope]
            div#dialog [dialog with ng-bindings]</pre>
<p>
and then after the dialog is shown, ends up:
<p>
<pre>html
    body
        div [angular scope]
        div#dialog [dialog with ng-bindings]</pre>
<p>
When the scope is first instantiated everything works as usual. The bindings inside th[...]]]></description>
    <content:encoded><![CDATA[
<p>
I like Angular, in fact I like it a lot more than I ever expected I would. But one of the issues that nobody ever tells you about is how easy it is to introduce very very subtle bugs simply by using third party libraries. The basic problem is that Angular enforces certain constraints upon you which may or may not be respected by third party libraries. 
<p>
A good example, which I&#039;m sure must happen quite frequently, is a library that moves DOM around without telling you. My specific case with this class of bug was a pop-up dialog which I had templated inside an Angular scope in my DOM, which my dialog library (provided by KendoUI) then moved out to a different area of the DOM.
<p>
It starts off thus:
<p>
<pre>html
    body
        div [angular scope]
            div#dialog [dialog with ng-bindings]</pre>
<p>
and then after the dialog is shown, ends up:
<p>
<pre>html
    body
        div [angular scope]
        div#dialog [dialog with ng-bindings]</pre>
<p>
When the scope is first instantiated everything works as usual. The bindings inside the dialog are evaluated correctly. But when the scope is destroyed, because the dialog has been moved, it doesn&#039;t get torn down with the rest of the template and lingers in the DOM, bindings and ng-* event handlers intact. 
<p>
When the scope is instantiated for a second time, the dialog and its ID already exist, outside of any scope telling it to update. What happens here is that the dialog is still bound to the original instance of the scope, which is defunct. When you interact with this dialog, it behaves fairly normally, except that all the events it invokes fire on a defunct scope. Your dialog, which is created by clicking items on your current screen, is actually performing actions on something from a previous screen!
<p>
There is an easy fix to this: if your scope uses dialogs, make sure you remove them from the DOM when the scope&#039;s <code>$destroy</code> event fires.
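<p>
As a sketch of that cleanup pattern (using a stand-in scope object rather than real Angular, so everything here is illustrative):

```javascript
// Stand-in for an Angular scope: just enough $on/$destroy behaviour
// to show the cleanup pattern. Not the real Angular implementation.
function FakeScope() { this.handlers = []; }
FakeScope.prototype.$on = function (name, fn) {
  if (name === '$destroy') { this.handlers.push(fn); }
};
FakeScope.prototype.$destroy = function () {
  this.handlers.forEach(function (fn) { fn(); });
};

// The dialog node lives outside the scope's template, so the framework
// will not remove it for us; we do it in a $destroy handler instead.
var danglingDialogs = ['#dialog'];   // stand-in for the moved DOM node
var scope = new FakeScope();
scope.$on('$destroy', function () {
  danglingDialogs.pop();             // equivalent of dialog.remove()
});

scope.$destroy();   // scope torn down -> dialog removed along with it
```

In real code the handler body would call something like <code>dialog.remove()</code> on the actual DOM node the library relocated.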
<p>
Debugging this class of error is extremely hard. Angular makes stepping through live code a bit difficult at the best of times, and the situation you eventually end up with is 
<p>
1. $scope.variable === &#039;expected value&#039;
<br>
&lt;disconnected async code&gt;
<br>
2. $scope.variable === &#039;unexpected value&#039;
<p>
The key is to recognise that $scope.variable does not change; instead, it is $scope itself that changes. I find it helpful to add a &#039;destroyed&#039; flag to destroyed scopes, so if they show up in the debugger it&#039;s pretty obvious things have gone awry.
<p>
]]></content:encoded>
  </item>
      <item>
    <title>TypeScript and Closure</title>
    <link>https://blog.asgaard.co.uk/2014/10/05/typescript-and-closure</link>
    <pubDate>Sun, 05 Oct 14 16:42:07 +0000</pubDate>
    <guid>https://blog.asgaard.co.uk/2014/10/05/typescript-and-closure</guid>
    <description><![CDATA[
<p>
For the past few years I&#039;ve worked a lot on large JavaScript projects. JS is a very dynamic language and that means it is very difficult to maintain a reliable code base for multiple developers over long periods of time without strong tooling.
<p>
For most of that time, we&#039;ve used Google&#039;s Closure Compiler. Closure (the compiler) is extremely good and very useful. Although its main way of selling itself is to describe itself as an advanced minifier, its real value is in its strong understanding of the JS you give it and its ability to perform a lot of static checking. Closure (the library) is not something I would recommend, but the compiler is solid.
<p>
The only real downside of Closure is that it requires adding ugly annotations to your code, e.g.
<p>
<pre>/** 
  * @param {string} s
  * @return {number}
  */
function toNumber(s) { return parseInt(s, 10); }</pre>
<p>
I think that the fact it requires all these annotations to become useful is the main reason that Closure has been largely overlooked by JS developer[...]]]></description>
    <content:encoded><![CDATA[
<p>
For the past few years I&#039;ve worked a lot on large JavaScript projects. JS is a very dynamic language and that means it is very difficult to maintain a reliable code base for multiple developers over long periods of time without strong tooling.
<p>
For most of that time, we&#039;ve used Google&#039;s Closure Compiler. Closure (the compiler) is extremely good and very useful. Although its main way of selling itself is to describe itself as an advanced minifier, its real value is in its strong understanding of the JS you give it and its ability to perform a lot of static checking. Closure (the library) is not something I would recommend, but the compiler is solid.
<p>
The only real downside of Closure is that it requires adding ugly annotations to your code, e.g.
<p>
<pre>/** 
  * @param {string} s
  * @return {number}
  */
function toNumber(s) { return parseInt(s, 10); }</pre>
<p>
I think that the fact it requires all these annotations to become useful is the main reason that Closure has been largely overlooked by JS developers (i.e. &quot;too much effort&quot;). But I would argue that requiring type documentation is not really a bad thing if you want maintainable code.
<p>
The flip side of this is that Closure code is <em>just JavaScript</em>. You don&#039;t even need to invoke the compiler to test your modifications - just fire up your browser and away you go (well, unless dependencies need regenerating, but that&#039;s actually quite rare). Assuming you are using the compiler as a safety check, it&#039;s usually only necessary to use it before you&#039;re ready to commit your code.<h2>TypeScript</h2>
<p>
Next onto the scene is TypeScript. TypeScript is a compile-to-JS language by Microsoft that fulfils the same niche as Closure. It adds type information into the language syntax and its compiler is able to perform the same class of static checks as Closure. It looks a bit neater:
<p>
<pre>function toNumber(s: string) { return parseInt(s, 10); }</pre>
<p>
The main feature that TypeScript gains, as a result of being its own language and not a set of non-universally recognised comment conventions, is that IDEs that recognise TypeScript can be considerably more helpful. In short, I am describing Visual Studio&#039;s IntelliSense.
<p>
But in reality the quality of TypeScript&#039;s tooling has never appeared to be of much importance to Microsoft, and it&#039;s still not very good now. Pre-1.0 was practically unusable due to the instability and performance woes it would inflict upon Visual Studio (by which I mean several crashes per day, often preceded by periods of very high memory usage). The fact that it was ever released in that state is itself a worry as to how serious Microsoft is about TypeScript, and despite critical issues now being fixed, it&#039;s still not really very good:
<p>
Visual Studio still usually requires one restart per day to fix performance issues (the dreaded &quot;Formatting task took more than 2 second(s) to complete. Try again later&quot; message - once you start seeing that, some keystrokes take 2 seconds).
<p>
Occasionally the editor will start reporting non-existent syntax errors and refuse to compile to JS. This is extremely confusing, but it can be fixed by cutting the offending code out of the document, pressing save and letting it recompile, and then pasting it back in. Until you get wise to this, you can spend a lot of time hunting for the (non-existent) cause of the syntax error.
<p>
Overall the quality is poor enough that I&#039;m sceptical that TypeScript is getting any real everyday testing/usage by its developers. That would be ok if it was still in alpha state, but it&#039;s been public for two years now. 
<p>
You can, of course, use any editor you like to write TypeScript, but if that&#039;s the case you aren&#039;t gaining anything over Closure.
<p>
There are other annoyances that probably aren&#039;t TypeScript&#039;s fault. Debugging TypeScript is a bit less pleasant than debugging JavaScript, simply because the browser isn&#039;t executing TypeScript. Sourcemaps get you 90% of the way there, but it&#039;s still not as smooth as debugging the pure unedited JavaScript as allowed by Closure.]]></content:encoded>
  </item>
      <item>
    <title>Web Workers</title>
    <link>https://blog.asgaard.co.uk/2014/08/02/web-workers</link>
    <pubDate>Sat, 02 Aug 14 21:32:56 +0000</pubDate>
    <guid>https://blog.asgaard.co.uk/2014/08/02/web-workers</guid>
    <description><![CDATA[
<p>
Web Workers. The web&#039;s answer to threading. I&#039;m disappointed to learn, after much investigation, that for my own practical purposes, web workers are useless. This appears to be a contrarian opinion - most articles discussing web workers praise their existence while completely skipping over the practicality of actually benefiting from them.
<p>
They have two main problems:
<p>
1. A worker cannot access the DOM. This is actually less of a problem than it initially appears, because having your GUI exist only in one thread is a reasonable limitation. But sometimes building DOM really is a bottleneck and it would be nice if you could at least build isolated fragments in workers.
<p>
2. Workers cannot share data; they are entirely separate beings. In extremely contrived circumstances, you can transfer ownership of a piece of data from one thread to another. This is not a generally useful piece of functionality, because you need to be using objects that implement the Transferable interface. You&#039;re probably w[...]]]></description>
    <content:encoded><![CDATA[
<p>
Web Workers. The web&#039;s answer to threading. I&#039;m disappointed to learn, after much investigation, that for my own practical purposes, web workers are useless. This appears to be a contrarian opinion - most articles discussing web workers praise their existence while completely skipping over the practicality of actually benefiting from them.
<p>
They have two main problems:
<p>
1. A worker cannot access the DOM. This is actually less of a problem than it initially appears, because having your GUI exist only in one thread is a reasonable limitation. But sometimes building DOM really is a bottleneck and it would be nice if you could at least build isolated fragments in workers.
<p>
2. Workers cannot share data; they are entirely separate beings. In extremely contrived circumstances, you can transfer ownership of a piece of data from one thread to another. This is not a generally useful piece of functionality, because you need to be using objects that implement the Transferable interface. You&#039;re probably wondering &quot;what&#039;s the transferable interface?&quot; - exactly - it&#039;s not exactly widely used. 
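<p>
For concreteness, here is a minimal sketch of what a transfer looks like (shown with <code>MessageChannel</code>, which uses the same transfer-list mechanics as <code>worker.postMessage</code>; the names are illustrative). The ArrayBuffer is moved, not copied, and the sending side is left holding a detached buffer:

```javascript
// Transferring an ArrayBuffer moves ownership to the receiving port;
// the sender's reference is detached (its byteLength drops to 0).
var buffer = new ArrayBuffer(1024);
var channel = new MessageChannel();

channel.port1.postMessage(buffer, [buffer]);   // second arg: transfer list

var detached = buffer.byteLength;              // 0 after the transfer

channel.port1.close();
channel.port2.close();
```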
<p>
This leaves you in the position that you need to pass copies of your data through to web workers. Copies are passed via a JSON-like process, by which I mean that objects get stripped of their classes, so you also probably need to use your own serialization augmentations to re-assemble data into real objects with correct prototypes. At this point, not only have you taken on a lot of extra work, you&#039;ve also most likely killed the performance of your application by bogging it down in communication overhead, which seems like an unfortunate result when the entire point of this process was to speed it up.
<p>
In short, web workers have a very narrow use case: they are useful if and only if your problem consists of a lot of computation and very little data, otherwise the communication overhead will be greater than the gains made by using more threads. I&#039;m not denying this use case exists, but I am denying that the majority of JS apps which could benefit from threads fall into this category.<hr/>
<p>
If anyone disagrees with this I&#039;d be interested to hear why. Most of the articles about web workers seem to vaguely allude to the (in my opinion extremely poor) solution of using transferable objects, but completely omit the practicality of using them, which almost makes me feel like I&#039;m missing something...]]></content:encoded>
  </item>
      <item>
    <title>Serialization in web apps/JavaScript</title>
    <link>https://blog.asgaard.co.uk/2014/04/28/serialization-in-web-apps</link>
    <pubDate>Mon, 28 Apr 14 17:48:41 +0000</pubDate>
    <guid>https://blog.asgaard.co.uk/2014/04/28/serialization-in-web-apps</guid>
    <description><![CDATA[
<p>
Serialization is a problem that pops up in any persistent software. In web apps, it announces itself as soon as requirements go beyond simple CRUD. You suddenly have all these data models that need to be represented in local memory at runtime and also need to be saved persistently in some form.<h2>JSON</h2>
<p>
Serialization is actually a laborious problem that touches pretty much your entire codebase. 
<p>
You might think &quot;but JavaScript objects are basically JSON, right?&quot; Well, yes, they are. &quot;And JSON is just a serialization format, right?&quot; Well, yes... but it&#039;s easy to overestimate how much it does for you (which is fine, and is not a valid criticism of JSON). 
<p>
The limitations are encountered very early on: JSON won&#039;t help you deserialize a date, and although it will help you serialize <code>var steve = new Person(&#039;Steve&#039;)</code> into <code>{&#039;name&#039;: &#039;Steve&#039;}</code>, it won&#039;t help deserialize that into something that fulfils <code>person.getName() =&gt; &#039;Steve&#039;</code>.<h2>Types</h2>[...]]]></description>
    <content:encoded><![CDATA[
<p>
Serialization is a problem that pops up in any persistent software. In web apps, it announces itself as soon as requirements move just beyond CRUD. You suddenly have all these data models that need to be represented in local memory at runtime and also need to be saved persistently in some form.<h2>JSON</h2>
<p>
Serialization is actually a laborious problem that touches pretty much your entire codebase. 
<p>
You might think &quot;but JavaScript objects are basically JSON, right?&quot; Well, yes, they are. &quot;And JSON is just a serialization format, right?&quot; Well, yes... but it&#039;s easy to overestimate how much it does for you (which is fine, and is not a valid criticism of JSON). 
<p>
The limitations are encountered very early on: JSON won&#039;t help you deserialize a date, and although it will help you serialize <code>var steve = new Person(&#039;Steve&#039;)</code> into <code>{&#039;name&#039;: &#039;Steve&#039;}</code>, it won&#039;t help deserialize that into something that fulfils <code>person.getName() =&gt; &#039;Steve&#039;</code>.<h2>Types</h2>
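<p>
Both limitations fit in a short experiment (the <code>Person</code> here is the hypothetical one from above):

```javascript
// A round trip through JSON silently turns a Date into a string
// and strips a class instance down to a plain object.
function Person(name) { this.name = name; }
Person.prototype.getName = function () { return this.name; };

var before = { who: new Person('Steve'), dob: new Date('1990-05-01T00:00:00Z') };
var after = JSON.parse(JSON.stringify(before));

console.log(typeof after.dob);         // 'string' - no longer a Date
console.log(typeof after.who.getName); // 'undefined' - the prototype is gone
```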
<p>
The way I&#039;ve handled this in the past is to have a central serialization system which handles wrapping things up into JSON and unwrapping it into your application&#039;s data structures, by augmenting JSON representations with type data, and delegating down to data-structure specific implementations of serialize and deserialize.
<p>
I&#039;m not talking about the <code>JSON.stringify()</code> call here, which is trivial, I&#039;m talking about representing my model in a way that is JSON-compatible and also includes all information necessary to deserialize automatically.
<p>
For example, JSON has no date type, so we have to represent a date as a JSON primitive and do some magic either side of the serialization to convert it to a primitive and to turn it back into a JavaScript date. A serializer for date might just convert the date into a time string, like:
<p>
<pre>function serializeDate(date) {
    return date.toString();
}</pre>
<p>
This will be invoked as part of a larger routine, which uses the above function to populate the &#039;value&#039; field, and itself populates the <code>$type</code> field:
<p>
<pre>{
    &quot;$type&quot;: &quot;date&quot;, 
    &quot;value&quot;: &quot;Mon Apr 28 2014 01:00:00 GMT+0100 (BST)&quot;
}</pre>
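<p>
A minimal sketch of that larger routine, dispatching on constructor name (the <code>serializers</code> map and its exact shape are my own illustration, not a fixed API):

```javascript
// Hypothetical top-level serializer: wraps known types into the
// ($type, value) structure and passes primitives through untouched.
var serializers = {
  'Date': function (d) { return { '$type': 'date', 'value': d.toString() }; }
};

function serialize(value) {
  var s = value != null && value.constructor && serializers[value.constructor.name];
  return s ? s(value) : value;
}
```
<p>
With that in place, <code>serialize(new Date())</code> produces the <code>($type, value)</code> object shown above, while primitives pass through untouched.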
<p>
A deserializer for this might look like:
<p>
<pre>function deserializeDate(value, cb) {
    cb(null, new Date(value));
}</pre>
<p>
which again exists as a part of a larger routine which analyses the <code>$type</code> field and finds the deserializeDate to pass the value field into.
<p>
Deserializers are best made asynchronous by default because sooner or later you&#039;ll want to represent references to objects that are stored remotely, which require an HTTP request to retrieve, and it&#039;s hard to go from synchronous to asynchronous later.<h2>deserialize()</h2>
<p>
To tie everything together, you have a top level <code>deserialize(data, cb)</code> function. It&#039;s useful if this function is fairly intelligent and handles anything you throw at it. For convenience, I&#039;ve added the ability to recursively deserialize arbitrary JS maps, because later on it&#039;s useful to just throw a block of data at this function and get a block back, instead of having to queue up a lot of calls. I have not implemented the same case for Array in an attempt to keep the code brief, but you should consider doing so.
<p>
In the code below I&#039;ve used <a href='https://github.com/caolan/async'>Async</a> and <a href='http://underscorejs.org/'>Underscore</a>.
<p>
<pre>var typeMap = {
    'date' : deserializeDate
};

function deserialize(data, cb) {

    // JSON primitives are handled easily
    var primitives = {
        'string' : true,
        'boolean': true,
        'number': true
    };
    if (primitives[typeof data] || data == null) {
        // This is a primitive - we can just return it.
        cb(null, data);
        return;
    }

    var typeField = data['$type'];
    var deserializer = typeField &amp;&amp; typeMap[typeField];
    if (typeof deserializer === 'function') { 
        // This is an object conforming to our ($type, value) structure.
        deserializer(data['value'], cb);
        return;
    }

    // Handle arbitrary JS objects by recursively deserializing their contents.
    if (data.constructor.name === ({}).constructor.name) {
        var ret = {};
        async.each(_.keys(data), function(key, cb) {
            deserialize(data[key], function(err, value) {
                if (err)  { console.log('Some error deserializing ', data[key]); }
                else { ret[key] = value; }
                cb();
            });
        }, function(err) {
            cb(null, ret);
        });
        return;
    }

    // Nothing matched - report it rather than silently never calling cb.
    cb(new Error('No deserializer for ' + (typeField || data.constructor.name)));
}</pre>
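<p>
To see the behaviour end to end, here is the same flow with the Async and Underscore dependencies replaced by a plain counter and <code>Object.keys</code> (a simplified stand-in, not the code above):

```javascript
// Dependency-free stand-in for the deserializer above: a pending
// counter replaces async.each, Object.keys replaces _.keys.
var typeMap = {
  'date': function (value, cb) { cb(null, new Date(value)); }
};

function deserialize(data, cb) {
  if (data == null || typeof data !== 'object') { cb(null, data); return; }

  var deserializer = typeMap[data['$type']];
  if (typeof deserializer === 'function') {
    deserializer(data['value'], cb);
    return;
  }

  var keys = Object.keys(data), ret = {}, pending = keys.length;
  if (pending === 0) { cb(null, ret); return; }
  keys.forEach(function (key) {
    deserialize(data[key], function (err, value) {
      if (!err) { ret[key] = value; }
      if (--pending === 0) { cb(null, ret); }
    });
  });
}

deserialize({ n: 3, when: { '$type': 'date', 'value': '2014-04-28T00:00:00Z' } },
            function (err, result) {
  console.log(result.n);                    // 3
  console.log(result.when instanceof Date); // true
});
```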
<p>
I&#039;ve omitted the serialization code, but it&#039;s very much the same idea. You have a top level serializer that generates JSON-compatible objects by delegating to data-type specific serializers.<h2>Deserializing your own data structures</h2>
<p>
That was pretty easy. The harder parts come when you consider your own objects. 
<br>
Let&#039;s say you have a Person class. It&#039;s useful to have a way to merge serialized data into an existing Person (because this allows us to re-purpose our serialization code for handling live-update events), and it&#039;s also useful to have a way to create a new Person. Luckily, the second is just a trivial special case of the first.
<p>
<pre>Person.fromSerialized = function(obj, cb) {
    var p = new Person();
    p.mergeFromSerialized(obj, cb);
}

// Add this to the typeMap, so it becomes visible to deserialize()
typeMap['Person'] = Person.fromSerialized

Person.prototype.mergeFromSerialized = function(obj, cb) {

    // The correct way is to use deserialize(). This looks recursive, but 
    // the subtlety is that obj is not a typed object - it's just a plain 
    // JS map of properties. Because of this, the deserializer won't 
    // try to delegate and will just deserialize each member

    function take(object, key, defaultValue) {
        if (object.hasOwnProperty(key)) { 
            return object[key]; 
        }
        else { 
            return defaultValue;
        }
    }

    deserialize(obj, _.bind(function(err, data) {
        this.name = take(data, 'name', this.name);
        this.dob = take(data, 'dob', this.dob);
        cb(null, this);
    }, this));
}</pre>
<p>
<code>take()</code> is a helper function which returns the given key from an object unless that key doesn&#039;t exist, in which case it returns a default. It avoids a lot of <code>if (data.hasOwnProperty()) {}</code> blocks and makes the code a bit more legible.<h2>What about references?</h2>
<p>
We still very quickly encounter yet another case: that objects need shared references. If a Person object has a friends array, the JSON doesn&#039;t want to embed the friends in that array, it wants to just store a reference. JSON has no reference support so you need to encode your own.
<p>
In this case you need a way to refer to the object itself, not its contents.
<p>
The way you handle this is to ID persistent objects and serialize them as special <code>&#039;$ref&#039;</code> objects, and expose a method on your server to return a specific object with the given ID.
<p>
<pre>{
    &quot;$type&quot;: &quot;Person&quot;,
    &quot;value&quot;: {
         &quot;name&quot;: &quot;Steve&quot;,
         &quot;friends&quot;: [
             { 
                 &quot;$ref&quot; : {
                     &quot;$collection&quot;: &quot;People&quot;,
                     &quot;id&quot;: 3
                 }
             }
         ]
     }
}</pre>
<p>
This presents an interesting point: that you need two different serialized representations for objects that may exist in collections. One returns a normal keyed object representing the object&#039;s state, the other returns a <code>$ref</code> object. I&#039;ve found the latter case to be the generally useful one, and the former to be a special case which the caller should only invoke purposefully. Meaning: <code>serialize(myPerson) =&gt; {$ref: ... }</code>, and <code>myPerson.serialize() =&gt; {name: ..., dob: ..., friends: ... }</code>.
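<p>
In code, that convention might look like the following sketch (the <code>id</code> field and the <code>People</code> collection name are assumptions for illustration):

```javascript
// Two serialized forms: a default reference form, and a full form
// which the caller must ask for explicitly.
function Person(id, name) {
  this.id = id;
  this.name = name;
  this.friends = [];
}

// Reference form - what the top-level serialize() emits by default.
Person.prototype.toRef = function () {
  return { '$ref': { '$collection': 'People', 'id': this.id } };
};

// Full form - embeds this object's state, but still only
// references the objects it points at.
Person.prototype.serialize = function () {
  return {
    '$type': 'Person',
    'value': {
      'name': this.name,
      'friends': this.friends.map(function (f) { return f.toRef(); })
    }
  };
};
```
<p>
So <code>serialize(steve)</code> would delegate to <code>steve.toRef()</code>, and embedding the full state remains an explicit <code>steve.serialize()</code> call.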
<p>
The deserializer for a $ref looks something like:
<p>
<pre>var collections = {};
function deserializeRef(data, cb) {
    
    var collectionName = data['$collection'],
        id = data['id'];
    if (!collections[collectionName]) {
        collections[collectionName] = {};
    }
    var collection = collections[collectionName];
    if (collection[id]) { cb(null, collection[id]); }
    else {
        yourServerApi.getCollectionElement(collectionName, id, 
                                           function(err, response) {
            if (err) { cb(err); return }
            deserialize(response, function(err, response) {
                if (err) { cb(err); return }
                collection[id] = response;
                cb(err, collection[id]);
            });
        });
    }
}</pre><h2>Minification concerns</h2>
<p>
It&#039;s still a bit irritating that the <code>mergeFromSerialized</code> code requires us to manually write out a line for each property.
<p>
It&#039;s tempting to rewrite it to something like this:
<p>
<pre>Person.prototype.mergeFromSerialized = function(obj, cb) {
    var myFields = ['name', 'dob']; 
    deserialize(obj, _.bind(function(err, data) {
        _.each(myFields, function(field) {
            this[field] = take(data, field, this[field]);
        }, this);
        cb(null, this);
    }, this));
}</pre>
<p>
Unfortunately there&#039;s a glaring problem with this code: You&#039;ve just trashed static analysis. If you are using a compiler which aggressively renames class members, your deserialization will fail, because your code will write to <code>this[&#039;dob&#039;]</code>, which your compiler has renamed to <code>this.a</code>.  
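<p>
A contrived illustration, with the compiler&#039;s renaming simulated by hand:

```javascript
// What aggressive renaming does to the field-list approach: the
// class member gets renamed, but the string in the array does not.
function Person() { this.a = null; }   // compiler renamed this.dob -> this.a
var myFields = ['dob'];                // string literals are left alone

var p = new Person();
myFields.forEach(function (field) {
  p[field] = '1990-05-01';             // writes p['dob'], which nothing reads
});

console.log(p.a);   // null - the real field was never touched
console.log(p.dob); // '1990-05-01' - an orphaned property
```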
<p>
Using such aggressive minification might not be of great importance for a smaller project, but if your compiled JS file is measuring several megabytes, it&#039;s useful to be able to trim the source.
<p>
I am not really sure what the answer to this is, other than auto-generating the long form serialization source code.]]></content:encoded>
  </item>
      <item>
    <title>Angular vs jQuery</title>
    <link>https://blog.asgaard.co.uk/2014/04/03/angular-vs-jquery</link>
    <pubDate>Thu, 03 Apr 14 20:57:08 +0000</pubDate>
    <guid>https://blog.asgaard.co.uk/2014/04/03/angular-vs-jquery</guid>
    <description><![CDATA[
<p>
On HN today there is an inflammatory article called <a href='https://news.ycombinator.com/item?id=7522520'>&#039;The Reason Angular JS will fail&#039;</a>, which tl;drs to &quot;it&#039;s too complicated compared to jQuery&quot;.
<p>
It&#039;s not a well argued opinion, but it is one I&#039;m inclined to agree with.
<p>
The comments all jump on the fact it is badly argued and state that comparing Angular&#039;s complexity to jQuery&#039;s simplicity is apples to oranges because jQuery isn&#039;t suitable for building large apps, whereas Angular is.
<p>
Yeah, well, no. I don&#039;t agree there.
<p>
I&#039;m curious how many of those people have actually written a large JS app, instead of just imagined what it might involve. I&#039;ve been working on what is currently a 70kloc (and growing) JS app for the last 18 months. I have used JS frameworks before, but we don&#039;t use any on this project. I think, in general, frameworks have properties that make certain kinds of projects easier to put together, and none of those properties is closely related to the size of the project.
<p>
We briefly l[...]]]></description>
    <content:encoded><![CDATA[
<p>
On HN today there is an inflammatory article called <a href='https://news.ycombinator.com/item?id=7522520'>&#039;The Reason Angular JS will fail&#039;</a>, which tl;drs to &quot;it&#039;s too complicated compared to jQuery&quot;.
<p>
It&#039;s not a well argued opinion, but it is one I&#039;m inclined to agree with.
<p>
The comments all jump on the fact it is badly argued and state that comparing Angular&#039;s complexity to jQuery&#039;s simplicity is apples to oranges because jQuery isn&#039;t suitable for building large apps, whereas Angular is.
<p>
Yeah, well, no. I don&#039;t agree there.
<p>
I&#039;m curious how many of those people have actually written a large JS app, instead of just imagined what it might involve. I&#039;ve been working on what is currently a 70kloc (and growing) JS app for the last 18 months. I have used JS frameworks before, but we don&#039;t use any on this project. I think, in general, frameworks have properties that make certain kinds of projects easier to put together, and none of those properties is closely related to the size of the project.
<p>
We briefly looked at Angular but it seemed like it was causing us problems instead of solving them so it didn&#039;t survive long (being overly complex didn&#039;t help its case).
<p>
The statement that Angular is for big apps and jQuery is for small apps seems like a false dichotomy to me. 
<p>
We use jQuery for manipulating the DOM because without two way data binding, you&#039;d be crazy not to. Data binding is a big attraction of Angular (and others). 
<p>
I like data binding from a theoretical perspective, but in practice, generating and syncing DOM is not really a hard or notable problem. It&#039;s just a relatively minor implementation issue. I don&#039;t think data binding <em>really</em> provides an advantage in terms of time or effort or maintainability - it seems like it should, but you revise your opinion somewhat after spending many hours debugging data binding templates. Processes relying on other people&#039;s magic are hard to debug.
<p>
Data binding is largely a distraction from things that really do take time and energy, which tend not to be problems touched on by frameworks anyway. Things like &quot;how do I make all these subtly inconsistent but nevertheless intuitive requirements sort of co-exist&quot;. Software is a lot about bridging the gap between reality and someone else&#039;s vision of how reality should be. That stuff is hard. That&#039;s the risk to a big project. Keeping your GUI and data models in sync is not.
<p>
But saying that our app is built with jQuery instead of Angular is sort of missing the point. We just use jQuery at the view level and it works very well. But that&#039;s all it does. We don&#039;t use the DOM to store state, which, I think, is the assertion usually levelled at &#039;jQuery apps&#039;. It&#039;s just a tool for generating DOM a bit easier.
<p>
What I am saying is that I am sceptical of Angular being useful for anything more than a CRUD app on steroids, but I am not sceptical at all of jQuery being useful in virtually any HTML/JS project. This is because jQuery (roughly) adopts the philosophy of &quot;do one thing and do it well&quot;, which means it&#039;s a very good fit for all the use cases it&#039;s aiming at. The same isn&#039;t true of Angular.]]></content:encoded>
  </item>
      <item>
    <title>My favourite JavaScript bug</title>
    <link>https://blog.asgaard.co.uk/2013/05/27/my-favourite-javascript-bug</link>
    <pubDate>Mon, 27 May 13 18:41:00 +0000</pubDate>
    <guid>https://blog.asgaard.co.uk/2013/05/27/my-favourite-javascript-bug</guid>
    <description><![CDATA[
<p>
To be clear, this isn&#039;t a bug *in* JavaScript, it&#039;s a bug in my own code. But it is one which various properties of JavaScript made both easy to write and very hard to detect later.
<p>
I am currently writing a LOLCODE interpreter in JavaScript. It&#039;s not overly complex - it just recursively evaluates an AST directly. 
<p>
LOLCODE allows for user defined functions:
<p>
<pre>HOW DUZ I ADD YR NUM1 AN NUM2
  SUM OF NUM1 AN NUM2
OIC

VISIBLE ADD 1 AN 3 MKAY  BTW =&gt; prints 4</pre>
<p>
The AST for the function definition looks something like:
<p>
<pre>{
    &quot;_name&quot;: &quot;FunctionDefinition&quot;,
    &quot;name&quot;: &quot;ADD&quot;,
    &quot;args&quot;: [
        &quot;NUM1&quot;,
        &quot;NUM2&quot;
    ],
    &quot;body&quot;: {
        &quot;_name&quot;: &quot;Body&quot;,
        &quot;lines&quot;: [
            {
                &quot;_name&quot;: &quot;FunctionCall&quot;,
                &quot;name&quot;: &quot;SUM OF&quot;,
                &quot;args&quot;: {
                    &quot;_name&quot;: &quot;A</pre>[...]]]></description>
    <content:encoded><![CDATA[
<p>
To be clear, this isn&#039;t a bug *in* JavaScript, it&#039;s a bug in my own code. But it is one which various properties of JavaScript made both easy to write and very hard to detect later.
<p>
I am currently writing a LOLCODE interpreter in JavaScript. It&#039;s not overly complex - it just recursively evaluates an AST directly. 
<p>
LOLCODE allows for user defined functions:
<p>
<pre>HOW DUZ I ADD YR NUM1 AN NUM2
  SUM OF NUM1 AN NUM2
OIC

VISIBLE ADD 1 AN 3 MKAY  BTW =&gt; prints 4</pre>
<p>
The AST for the function definition looks something like:
<p>
<pre>{
    &quot;_name&quot;: &quot;FunctionDefinition&quot;,
    &quot;name&quot;: &quot;ADD&quot;,
    &quot;args&quot;: [
        &quot;NUM1&quot;,
        &quot;NUM2&quot;
    ],
    &quot;body&quot;: {
        &quot;_name&quot;: &quot;Body&quot;,
        &quot;lines&quot;: [
            {
                &quot;_name&quot;: &quot;FunctionCall&quot;,
                &quot;name&quot;: &quot;SUM OF&quot;,
                &quot;args&quot;: {
                    &quot;_name&quot;: &quot;ArgList&quot;,
                    &quot;values&quot;: [
                        {
                            &quot;_name&quot;: &quot;Identifier&quot;,
                            &quot;name&quot;: &quot;NUM1&quot;
                        },
                        {
                            &quot;_name&quot;: &quot;Identifier&quot;,
                            &quot;name&quot;: &quot;NUM2&quot;
                        }
                    ]
                }
            }
        ]
    }
}</pre>
<p>
The easiest way to evaluate this is not to do anything special to compile that function, but just to use the magic of JS to represent the evaluation of the FunctionDefinition node as an interpreter action in its own right, similarly to how we&#039;d evaluate an identifier or any other construct.
<p>
If we can create a function which represents the evaluation of &#039;ADD&#039;, then we have a nice consistency with evaluation of built-in functions, which are implemented natively, like SUM OF, e.g:
<p>
<pre>var lol = function() {
    var self = this;

    this.symbols = {
      'SUM OF': function(a, b) { return a + b; }
    };

    var evalFuncDef = function(node) {
        self.symbols[node.name] = function() {
            return self.evaluate(node.body);
        };
    };

    this.evaluate = function(node) {
       // delegate to appropriate sub function
       if (node._name === 'FunctionDefinition') {
           return evalFuncDef(node);
       }
    };
}</pre>
<p>
I&#039;ve omitted setting up the argument list, figuring out the return value, etc, but the basic point is that both &#039;SUM OF&#039; (a native function) and &#039;ADD&#039; (a user supplied LOLCODE function) both exist in the symbol table in the form of an executable JavaScript function.
<p>
It&#039;s a fairly innocent looking piece of code.
<p>
Except for one thing.
<p>
The interpreter also has the ability to pause the program and evaluate watch-statements, like what you&#039;d find in Firebug or Chrome&#039;s debugger. For various uninteresting reasons<sup>1</sup>, the easiest way to do this is to clone the current symbol table and other scope into another interpreter and execute it there.
<p>
Something interesting happens here.
<p>
In the above code, I&#039;ve used the this/self idiom to get a reference to the current object into a nested function.
<p>
When we clone the symbol table into a different object, the self reference comes across unchanged. What that means is that the second interpreter happily executes any expression you give it correctly, until you supply it with one that tries to invoke a user written function. At this point, the <em>first</em> interpreter kicks into action, and continues executing. Imagine how difficult to debug this was - you can trace it all you want, you will see you&#039;re always in the right functions. The key is realising you&#039;ve suddenly switched to the wrong <em>object</em>, which in this case, always has very similar (if not identical) state!
<p>
The solution to this is obvious - don&#039;t rely on self inside the function we create, instead require it to be called with <code>symbols[node.name].call(this, ...)</code>. 
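<p>
The whole bug reduces to a few lines (a made-up <code>Interpreter</code> stands in for the real one):

```javascript
// self is captured when the symbol table entry is created, so a
// cloned entry keeps answering on behalf of the original object.
function Interpreter(name) {
  var self = this;
  this.name = name;
  this.symbols = {
    whoAmI: function () { return self.name; } // bound to its creator forever
  };
}

var first = new Interpreter('first');
var second = new Interpreter('second');
second.symbols.whoAmI = first.symbols.whoAmI; // "clone" the symbol table

console.log(second.symbols.whoAmI()); // 'first' - the wrong object answers

// The fix: rely on this, and require callers to use .call()
first.symbols.fixed = function () { return this.name; };
console.log(first.symbols.fixed.call(second)); // 'second'
```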
<p>
But the bug itself is admirable in its subtlety.
<br>
____
<br>
1. Mostly to do with keeping track of an awkward asynchronous callback. Call it an &#039;implementation issue&#039;.
<p>
]]></content:encoded>
  </item>
  </channel>
</rss>