<?xml version="1.0" encoding="UTF-8"?> 
<rss version="2.0"
        xmlns:content="http://purl.org/rss/1.0/modules/content/"
        xmlns:wfw="http://wellformedweb.org/CommentAPI/"
        xmlns:dc="http://purl.org/dc/elements/1.1/"
        xmlns:atom="http://www.w3.org/2005/Atom"
        xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
        xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
        >
<channel>
  <title>asgaard</title>
  <description></description>
  <link>https://blog.asgaard.co.uk/2014</link>
  <lastBuildDate>Tue, 12 May 26 17:12:57 +0000</lastBuildDate>
  <language>en</language>
  <count>13</count>
  <offset>0</offset>
      <item>
    <title>When should I use a JavaScript compiler, and which one should I use?</title>
    <link>https://blog.asgaard.co.uk/2014/12/20/when-should-i-use-a-javascript-compiler-and-which-one-should-i-use</link>
    <pubDate>Sat, 20 Dec 14 13:40:47 +0000</pubDate>
    <guid>https://blog.asgaard.co.uk/2014/12/20/when-should-i-use-a-javascript-compiler-and-which-one-should-i-use</guid>
    <description><![CDATA[
<p>
There are a few different options for Compile-to-JS languages and here I&#039;m going to outline the main contenders as of late 2014.<h2>Plain JavaScript</h2>
<p>
Let&#039;s start with the basics. A lot of people who do not understand plain JS berate it for many things, but in reality its main practical drawback is that it becomes unwieldy on larger projects (like any dynamic language), and in such cases a checking compiler will cut down on a lot of unit testing. Plain JavaScript is <em>fine</em> for small projects; it becomes problematic as your code grows, and the point at which that happens depends on the skill and discipline of your developers.
<p>
Regardless of what you choose, you&#039;re going to be working with JS at least indirectly, and you should learn it to a fluent level.<h2>CoffeeScript</h2>
<p>
<a href='http://coffeescript.org/' target='_blank'>CoffeeScript</a> (CS) was very popular when it was released, but its popularity has since waned. Its only feature is syntax, which implies that there is something seriously wrong with JavaScript&#039;s syntax (subjective), yet it fixes that by making the syntax com[...]]]></description>
    <content:encoded><![CDATA[
<p>
There are a few different options for Compile-to-JS languages and here I&#039;m going to outline the main contenders as of late 2014.<h2>Plain JavaScript</h2>
<p>
Let&#039;s start with the basics. A lot of people who do not understand plain JS berate it for many things, but in reality its main practical drawback is that it becomes unwieldy on larger projects (like any dynamic language), and in such cases a checking compiler will cut down on a lot of unit testing. Plain JavaScript is <em>fine</em> for small projects; it becomes problematic as your code grows, and the point at which that happens depends on the skill and discipline of your developers.
<p>
Regardless of what you choose, you&#039;re going to be working with JS at least indirectly, and you should learn it to a fluent level.<h2>CoffeeScript</h2>
<p>
<a href='http://coffeescript.org/' target='_blank'>CoffeeScript</a> (CS) was very popular when it was released, but its popularity has since waned. Its only feature is syntax, which implies that there is something seriously wrong with JavaScript&#039;s syntax (subjective), yet it fixes that by making the syntax completely different (Ruby-ish) instead of identifying exact problems and improving them. It is now quite surprising to come across CoffeeScript code, which suggests that people have generally rejected CS as an improvement (an opinion I agree with). It does not really help with the fundamental problem of code management, because it&#039;s just another dynamic language. Avoid.<h2>Dart</h2>
<p>
Google&#039;s <a href='https://www.dartlang.org/' target='_blank'>Dart</a> is a strange one: although Dart code can compile to JavaScript, it&#039;s really designed to run on a virtual machine, which only Chrome implements. For this reason, it is not worth evaluating further. Avoid unless you have a particular reason to use it.<h2>Google Closure Compiler</h2>
<p>
<a href='https://developers.google.com/closure/compiler/' target='_blank'>Closure</a> is a checking compiler from Google, and it is interesting in that it works on pure JS code, i.e. it&#039;s not a separate programming language. Code written for Closure needs comment type annotations (which are quite ugly and cumbersome, but easily written if you are disciplined) to make use of the compiler&#039;s features. The compiler does useful things like checking that the methods you are calling really exist, instead of leaving you to find out at runtime.
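<p>
As a sketch (using an invented function, not code from any real project), annotated code looks like this, and a call that violates the annotation fails at compile time instead of at runtime:
<p>
<pre>/**
 * @param {!Array.&lt;number&gt;} xs
 * @return {number}
 */
function sum(xs) {
  return xs.reduce(function (a, b) { return a + b; }, 0);
}

sum([1, 2, 3]);  // fine
// sum("123");   // Closure reports a type error here
</pre>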
<p>
The main downside of Closure is performance; there is no incremental compilation, which means that if you change one line of one file you have to recompile the whole thing, and that can take several minutes. I should stress that for development this makes little difference - you tend to develop using your raw code, not the compiled code, so invoking the compiler is something you do before committing code, not (always) before trying it out locally. To make full use of Closure, all external libraries need type definitions; for popular libraries these are easy to find, and it is not too hard to write them yourself (but it is an extra requirement).
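<p>
For illustration, an externs file for a hypothetical library might look like this (all names invented):
<p>
<pre>// my-lib-externs.js (hypothetical): declares the shape of an external
// library so the compiler can check calls into it.
/** @const */
var myLib = {};

/**
 * @param {string} s
 * @return {number}
 */
myLib.parse = function(s) {};
</pre>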
<p>
Google&#039;s investment in AtScript (see below) may have implications for the future of Closure, but the compiler itself is mature and reliable as it is.<h2>TypeScript</h2>
<p>
<a href='http://www.typescriptlang.org/' target='_blank'>TypeScript</a> (TS) has been around for a few years now and only recently seems to be finding its niche, although growth is still slow. TS is a superset of JS that adds some useful syntax, but its real value is in allowing you to write strongly typed, checked code, much like Closure; its standardisation also allows IDEs to give you things like code completion. If your average foray into JavaScript is a 50-line jQuery plugin, TypeScript is complete overkill for you. On the other hand, if you are working on a JS project with multiple developers and hundreds of files, TypeScript is <em>very useful</em>, although it is only recently that the tooling has become mature enough to handle this scale.
<p>
The tooling (via Visual Studio) is now both good and stable, and the compilation step in development is fast and largely transparent. If you are using an MS environment and have access to Visual Studio, you should strongly consider using TypeScript. MS claims that tooling exists for non-Windows platforms, but I cannot speak to its quality (and frankly, I&#039;d be sceptical).
<p>
Note that TS requires type definitions for external code, like Closure.
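<p>
As a sketch, a hand-written declaration for a hypothetical plain-JS global might look like this:
<p>
<pre>// my-lib.d.ts (hypothetical): describes a plain-JS global to the compiler
declare var myLib: {
    version: string;
    toNumber(s: string): number;
};
</pre>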
<p>
TS is a bit of a risk in that growth appears slow and MS might see that as a reason to discontinue it.<h2>AtScript</h2>
<p>
<a href='https://docs.google.com/document/d/11YUzC-1d0V1-Q3V0fQ7KSit97HnZoKVygDxpWzEYW0U/edit' target='_blank'>AtScript</a> is superficially not dissimilar to TypeScript, but at the moment the project is too immature to seriously evaluate how it practically differs, and because of its immaturity it would be unwise to start using AtScript any time soon.<h2>Summary</h2>
<p>
In summary, your choice for &quot;which compiler do I use?&quot; depends on project size, project length, and number of developers. I&#039;ll condense this all down to lines of code to get the point across. In order of preference:<table class='width-100'><thead><th>Project Size</th><th>Approx #lines of code</th><th>Choices</th></thead><tbody><tr><td>Tiny</td><td>1k</td><td>Plain JS</td></tr><tr><td>Small</td><td>10k</td><td>TS, Closure, Plain JS</td></tr><tr><td>Medium to very large</td><td>&gt;10k</td><td>TS, Closure</td></tr></tbody></table>]]></content:encoded>
  </item>
      <item>
    <title>Peripheral vision and security</title>
    <link>https://blog.asgaard.co.uk/2014/12/02/peripheral-vision-and-security</link>
    <pubDate>Tue, 02 Dec 14 18:46:25 +0000</pubDate>
    <guid>https://blog.asgaard.co.uk/2014/12/02/peripheral-vision-and-security</guid>
    <description><![CDATA[
<p>
Here&#039;s a bug report from the Composer project: <a href="https://github.com/composer/getcomposer.org/issues/76">https://github.com/composer/getcomposer.org/issues/76</a>
<p>
The report (rudely, but correctly) points out that the recommended way of installing the script, by getting it via curl and piping it into an interpreter, is a security risk in the event that the source gets hacked and starts serving malicious code. That&#039;s true - I agree.
<p>
But hang on a minute, Composer is a package manager for a programming language. The whole point of Composer is that you download other people&#039;s code and then include it straight into some executable of your own! You could literally be pulling in 50 different packages, each of which suffers from the exact same vulnerability - if the source is compromised, you are executing very untrusted code (instead of merely <em>we&#039;ll trust it because it&#039;s convenient</em> code).
<p>
I&#039;m not saying you shouldn&#039;t use package managers, nor am I saying that signing scripts/executables is a bad thing, but I am saying you should be a b[...]]]></description>
    <content:encoded><![CDATA[
<p>
Here&#039;s a bug report from the Composer project: <a href="https://github.com/composer/getcomposer.org/issues/76">https://github.com/composer/getcomposer.org/issues/76</a>
<p>
The report (rudely, but correctly) points out that the recommended way of installing the script, by getting it via curl and piping it into an interpreter, is a security risk in the event that the source gets hacked and starts serving malicious code. That&#039;s true - I agree.
<p>
But hang on a minute, Composer is a package manager for a programming language. The whole point of Composer is that you download other people&#039;s code and then include it straight into some executable of your own! You could literally be pulling in 50 different packages, each of which suffers from the exact same vulnerability - if the source is compromised, you are executing very untrusted code (instead of merely <em>we&#039;ll trust it because it&#039;s convenient</em> code).
<p>
I&#039;m not saying you shouldn&#039;t use package managers, nor am I saying that signing scripts/executables is a bad thing, but I am saying you should be a bit sensible about evaluating security risks.]]></content:encoded>
  </item>
      <item>
    <title>Bionic Boots</title>
    <link>https://blog.asgaard.co.uk/2014/11/25/bionic-boots</link>
    <pubDate>Tue, 25 Nov 14 22:47:38 +0000</pubDate>
    <guid>https://blog.asgaard.co.uk/2014/11/25/bionic-boots</guid>
    <description><![CDATA[
<p>
Here&#039;s a video of some &#039;bionic boots&#039;, intended to let humans run at 25mph, about twice as fast as an elite marathon runner: <a href="https://www.youtube.com/watch?v=VCnT-qWTE84">https://www.youtube.com/watch?v=VCnT-qWTE84</a>  
<p>
They&#039;re interesting from a technology point of view but a few things jump out at me.
<p>
<img src='/assets/img/2014-11-25/elite-running-form.jpg' class='width-100 center no-touch' alt=''/>
<p>
If you look at elite runners&#039; form you will virtually always see the front foot hitting the ground very, very close to the runner&#039;s centre of mass, while the back leg is extended at the hip. The runner&#039;s back will be straight, their hips will be pushed forwards, and they may have a slight forward lean through their entire body from the ankles upwards, but they will certainly not be bent at the waist. Those guys are distance runners but the classic &#039;shape&#039; is also recognisable in sprinters:
<p>
<img src='/assets/img/2014-11-25/sprinters.jpg' class='width-100 center no-touch' alt=''/>
<p>
It&#039;s even recognisable for Oscar Pistorius:
<p>
<img src='/assets/img/2014-11-25/pistorius.jpg' class='width-100 center no-touch' alt=''/>
<p>
The runner in the bionic boots is totally different, to the point that you don&#039;t even recognise it as running:
<p>
<img src='/assets/img/2014-11-25/bionic-boots.jpg' class='width-100 center no-touch' alt=''/>
<p>
We only have the example of one person running i[...]]]></description>
    <content:encoded><![CDATA[
<p>
Here&#039;s a video of some &#039;bionic boots&#039;, intended to let humans run at 25mph, about twice as fast as an elite marathon runner: <a href="https://www.youtube.com/watch?v=VCnT-qWTE84">https://www.youtube.com/watch?v=VCnT-qWTE84</a>  
<p>
They&#039;re interesting from a technology point of view but a few things jump out at me.
<p>
<img src='/assets/img/2014-11-25/elite-running-form.jpg' class='width-100 center no-touch' alt=''/>
<p>
If you look at elite runners&#039; form you will virtually always see the front foot hitting the ground very, very close to the runner&#039;s centre of mass, while the back leg is extended at the hip. The runner&#039;s back will be straight, their hips will be pushed forwards, and they may have a slight forward lean through their entire body from the ankles upwards, but they will certainly not be bent at the waist. Those guys are distance runners but the classic &#039;shape&#039; is also recognisable in sprinters:
<p>
<img src='/assets/img/2014-11-25/sprinters.jpg' class='width-100 center no-touch' alt=''/>
<p>
It&#039;s even recognisable for Oscar Pistorius:
<p>
<img src='/assets/img/2014-11-25/pistorius.jpg' class='width-100 center no-touch' alt=''/>
<p>
The runner in the bionic boots is totally different, to the point that you don&#039;t even recognise it as running:
<p>
<img src='/assets/img/2014-11-25/bionic-boots.jpg' class='width-100 center no-touch' alt=''/>
<p>
We only have the example of one person running in them, but the bio-mechanics that seem to be imposed by the bionic boots bear no resemblance to good running form, and, frankly, it looks bananas. At the point of impact both legs are flexed at the hip - even his back leg is flexed (which is bizarre) - and he is bent forward from the waist. All you can really say is that his arm swing looks ok.
<p>
He is not in a good position to be absorbing the impact; however, it may be that the boots themselves absorb it adequately and this isn&#039;t an issue.
<p>
Further to this, I&#039;m curious what state it leaves his hip muscles in. He&#039;s obviously using a lot of hip flexion to lift up his legs, but with his back bent forward he&#039;s never getting any hip extension. That means the muscles at the front of his legs/hips are getting worked hard, but the muscles at the back probably aren&#039;t getting used at all. Hip flexors are powerful muscles. This is an imbalance you don&#039;t want.]]></content:encoded>
  </item>
      <item>
    <title>It is too easy to introduce bugs into AngularJS apps by using third party libraries</title>
    <link>https://blog.asgaard.co.uk/2014/10/15/it-is-too-easy-to-introduce-bugs-into-angularjs-apps-by-using-third-party-libraries</link>
    <pubDate>Wed, 15 Oct 14 20:39:53 +0000</pubDate>
    <guid>https://blog.asgaard.co.uk/2014/10/15/it-is-too-easy-to-introduce-bugs-into-angularjs-apps-by-using-third-party-libraries</guid>
    <description><![CDATA[
<p>
I like Angular - in fact, I like it a lot more than I ever expected to. But one of the issues that nobody ever tells you about is how easy it is to introduce very, very subtle bugs simply by using third party libraries. The basic problem is that Angular enforces certain constraints upon you which may or may not be respected by third party libraries.
<p>
A good example, which I&#039;m sure must happen quite frequently, is a library that moves DOM around without telling you. My specific case with this class of bug was a pop-up dialog which I had templated inside an Angular scope in my DOM, which my dialog library (provided by KendoUI) then moved out to a different area of the DOM.
<p>
It starts off thus:
<p>
<pre>html
    body
        div [angular scope]
            div#dialog [dialog with ng-bindings]</pre>
<p>
and then after the dialog is shown, ends up:
<p>
<pre>html
    body
        div [angular scope]
        div#dialog [dialog with ng-bindings]</pre>
<p>
When the scope is first instantiated everything works as usual. The bindings inside th[...]]]></description>
    <content:encoded><![CDATA[
<p>
I like Angular - in fact, I like it a lot more than I ever expected to. But one of the issues that nobody ever tells you about is how easy it is to introduce very, very subtle bugs simply by using third party libraries. The basic problem is that Angular enforces certain constraints upon you which may or may not be respected by third party libraries.
<p>
A good example, which I&#039;m sure must happen quite frequently, is a library that moves DOM around without telling you. My specific case with this class of bug was a pop-up dialog which I had templated inside an Angular scope in my DOM, which my dialog library (provided by KendoUI) then moved out to a different area of the DOM.
<p>
It starts off thus:
<p>
<pre>html
    body
        div [angular scope]
            div#dialog [dialog with ng-bindings]</pre>
<p>
and then after the dialog is shown, ends up:
<p>
<pre>html
    body
        div [angular scope]
        div#dialog [dialog with ng-bindings]</pre>
<p>
When the scope is first instantiated everything works as usual. The bindings inside the dialog are evaluated correctly. But when the scope is destroyed, because the dialog has been moved, it doesn&#039;t get torn down with the rest of the template and lingers in the DOM, bindings and ng-* event handlers intact. 
<p>
When the scope is instantiated for a second time, the dialog and its ID already exist outside of any scope that would tell it to update. The dialog is therefore still bound to the original scope instance, which is defunct. When you interact with this dialog it behaves fairly normally, except that all the events it invokes fire on a defunct scope. Your dialog, which is created by clicking items on your current screen, is actually performing actions on something from a previous screen!
<p>
There is an easy fix for this: if your scope uses dialogs, make sure you remove them from the DOM when <code>$scope.$on(&#039;$destroy&#039;)</code> fires.
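<p>
A minimal sketch of that fix (<code>$scope.$on(&#039;$destroy&#039;, ...)</code> is standard Angular 1.x; the element lookup assumes the dialog keeps its <code>dialog</code> id after being moved):
<p>
<pre>// In the controller/directive that owns the dialog (sketch):
$scope.$on('$destroy', function () {
    var dialog = document.getElementById('dialog');
    if (dialog !== null) {
        // The dialog library moved this node out of our template,
        // so Angular will not tear it down for us - remove it manually.
        dialog.parentNode.removeChild(dialog);
    }
});
</pre>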
<p>
Debugging this class of error is extremely hard. Angular makes stepping through live code a bit difficult at the best of times, and the situation you eventually end up with is 
<p>
1. $scope.variable === &#039;expected value&#039;
<br>
&lt;disconnected async code&gt;
<br>
2. $scope.variable === &#039;unexpected value&#039;
<p>
The key is to recognise that $scope.variable does not change; instead, it is $scope that changes. I find it helpful to add a &#039;destroyed&#039; flag to destroyed scopes, so if they show up in the debugger it&#039;s pretty obvious things have gone awry.
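<p>
For example (a sketch; the flag name is arbitrary):
<p>
<pre>$scope.$on('$destroy', function () {
    // Make defunct scopes obvious when they show up in the debugger.
    $scope.DESTROYED = true;
});
</pre>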
]]></content:encoded>
  </item>
      <item>
    <title>TypeScript and Closure</title>
    <link>https://blog.asgaard.co.uk/2014/10/05/typescript-and-closure</link>
    <pubDate>Sun, 05 Oct 14 16:42:07 +0000</pubDate>
    <guid>https://blog.asgaard.co.uk/2014/10/05/typescript-and-closure</guid>
    <description><![CDATA[
<p>
For the past few years I&#039;ve worked a lot on large JavaScript projects. JS is a very dynamic language and that means it is very difficult to maintain a reliable code base for multiple developers over long periods of time without strong tooling.
<p>
For most of that time, we&#039;ve used Google&#039;s Closure Compiler. Closure (the compiler) is extremely good and very useful. Although it mainly sells itself as an advanced minifier, its real value is in its strong understanding of the JS you give it and its ability to perform a lot of static checking. Closure (the library) is not something I would recommend, but the compiler is solid.
<p>
The only real downside of Closure is that it requires adding ugly annotations to your code, e.g.
<p>
<pre>/** 
  * @param {string} s
  * @return {number}
  */
function toNumber(s) { return parseInt(s, 10); }</pre>
<p>
I think that the fact it requires all these annotations to become useful is the main reason that Closure has been largely overlooked by JS developer[...]]]></description>
    <content:encoded><![CDATA[
<p>
For the past few years I&#039;ve worked a lot on large JavaScript projects. JS is a very dynamic language and that means it is very difficult to maintain a reliable code base for multiple developers over long periods of time without strong tooling.
<p>
For most of that time, we&#039;ve used Google&#039;s Closure Compiler. Closure (the compiler) is extremely good and very useful. Although it mainly sells itself as an advanced minifier, its real value is in its strong understanding of the JS you give it and its ability to perform a lot of static checking. Closure (the library) is not something I would recommend, but the compiler is solid.
<p>
The only real downside of Closure is that it requires adding ugly annotations to your code, e.g.
<p>
<pre>/** 
  * @param {string} s
  * @return {number}
  */
function toNumber(s) { return parseInt(s, 10); }</pre>
<p>
I think that the fact it requires all these annotations to become useful is the main reason that Closure has been largely overlooked by JS developers (i.e. &quot;too much effort&quot;). But I would argue that requiring type documentation is not really a bad thing if you want maintainable code.
<p>
The flip side of this is that Closure code is <em>just JavaScript</em>. You don&#039;t even need to invoke the compiler to test your modifications - just fire up your browser and away you go (well, unless dependencies need regenerating, but that&#039;s actually quite rare). Assuming you are using the compiler as a safety check, it&#039;s usually only necessary to use it before you&#039;re ready to commit your code.<h2>TypeScript</h2>
<p>
Next onto the scene is TypeScript, a compile-to-JS language from Microsoft that fills the same niche as Closure. It adds type information to the language syntax, and its compiler can perform the same class of static checks as Closure. It looks a bit neater:
<p>
<pre>function toNumber(s: string) { return parseInt(s, 10); }</pre>
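<p>
With that signature, the compiler rejects mismatched calls at compile time (a sketch; the error message is paraphrased):
<p>
<pre>function toNumber(s: string) { return parseInt(s, 10); }

toNumber("42");   // fine, returns 42
// toNumber(42);  // error: number is not assignable to parameter of type string
</pre>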
<p>
The main feature TypeScript gains, as a result of being its own language rather than a set of non-universally recognised comment conventions, is that IDEs which understand TypeScript can be far more helpful. In short, I am describing Visual Studio&#039;s IntelliSense.
<p>
But in reality the quality of TypeScript&#039;s tooling has never appeared to be of much importance to Microsoft, and it&#039;s still not very good now. Pre-1.0 it was practically unusable due to the instability and performance woes it would inflict upon Visual Studio (by which I mean several crashes per day, often preceded by periods of very high memory usage). That it was ever released in that state is itself a worry about how serious Microsoft is about TypeScript, and despite critical issues now being fixed, it&#039;s still not very good:
<p>
Visual Studio usually still requires one restart per day to fix performance issues (the dreaded &quot;Formatting task took more than 2 second(s) to complete. Try again later&quot; message - once you start seeing that, some key strokes take 2 seconds).
<p>
Occasionally the editor will start reporting non-existent syntax errors and refuse to compile to JS. This is extremely confusing, but it can be fixed by cutting the offending code out of the document, pressing save and letting it recompile, and then pasting it back in. Until you get wise to this, you can spend a lot of time hunting for the (non-existent) cause of the syntax error.
<p>
Overall the quality is poor enough that I&#039;m sceptical TypeScript gets any real everyday testing/usage by its own developers. That would be ok if it were still in alpha, but it&#039;s been public for two years now.
<p>
You can, of course, use any editor you like to write TypeScript, but if that&#039;s the case you aren&#039;t gaining anything over Closure.
<p>
There are other annoyances that probably aren&#039;t TypeScript&#039;s fault. Debugging TypeScript is a bit less pleasant than debugging JavaScript, simply because the browser isn&#039;t executing TypeScript. Source maps get you 90% of the way there, but it&#039;s still not as smooth as debugging the pure, unedited JavaScript that Closure allows.]]></content:encoded>
  </item>
      <item>
    <title>Tight quads and hip flexors after running</title>
    <link>https://blog.asgaard.co.uk/2014/09/22/tight-quads-and-hip-flexors-after-running</link>
    <pubDate>Mon, 22 Sep 14 18:56:30 +0000</pubDate>
    <guid>https://blog.asgaard.co.uk/2014/09/22/tight-quads-and-hip-flexors-after-running</guid>
    <description><![CDATA[
<p>
I hesitate to give out such advice because it seems that for any running ailment there are a million and one potential causes and the correct treatment varies based on which affects you. But this has genuinely helped me so, here we go.
<p>
I used to get very tight quads and hip flexors (especially in my right leg about 36 hours after running), and I&#039;m pretty sure this was the cause of some medial knee pain. I&#039;ve successfully alleviated this in three ways:
<p>
1. I&#039;m more honest with myself about how slow an easy/comfortable run should be. i.e. <strong>I do most of my miles slower</strong>; we are talking 1:30-2:00/mile slower than my &#039;fast&#039; pace.
<p>
2. <strong>Form</strong>: When I run, I push my hips forward, engage my abs, and keep my back straight. There&#039;s a lot of conflicting information on running form, and most of it is very hard to feel while running. These are things you can feel <em>while</em> you&#039;re running, and they should result in more glute and hamstring activation and easier landings.
<p>
3. <strong>Hamstrings and gl</strong>[...]]]></description>
    <content:encoded><![CDATA[
<p>
I hesitate to give out such advice because it seems that for any running ailment there are a million and one potential causes and the correct treatment varies based on which affects you. But this has genuinely helped me so, here we go.
<p>
I used to get very tight quads and hip flexors (especially in my right leg about 36 hours after running), and I&#039;m pretty sure this was the cause of some medial knee pain. I&#039;ve successfully alleviated this in three ways:
<p>
1. I&#039;m more honest with myself about how slow an easy/comfortable run should be. i.e. <strong>I do most of my miles slower</strong>; we are talking 1:30-2:00/mile slower than my &#039;fast&#039; pace.
<p>
2. <strong>Form</strong>: When I run, I push my hips forward, engage my abs, and keep my back straight. There&#039;s a lot of conflicting information on running form, and most of it is very hard to feel while running. These are things you can feel <em>while</em> you&#039;re running, and they should result in more glute and hamstring activation and easier landings.
<p>
3. <strong>Hamstrings and glutes</strong>. This is the big one! I&#039;ve spent a few months training my hamstring and glute strength. Hamstrings and glutes are antagonists to the hip flexors and quads, so if the strength is imbalanced things don&#039;t work well, and, I think, the stronger muscle group ends up taking more of the workload than it should. Modern day life tends to weaken the hamstrings and glutes (with lots of sitting).
<p>
My routine looks something like this:<h3>Daily or almost daily, AND pre-running as part of a warm up/activation</h3>
<p>
<a href='http://redefiningstrength.com/best-glute-exercise-glute-bridge/'>Glute bridges</a> (3x10)
<br>
  <a href='http://www.bodybuilding.com/exercises/detail/view/name/glute-kickback'>Glute kickbacks</a> (3x8)   (note: don&#039;t be confused by the word &#039;kick&#039;, it should be a slow, controlled movement)
<br>
  <a href='https://gs1.wac.edgecastcdn.net/8019B6/data.tumblr.com/f355aaed05b51b90e1c63e0bf6c04327/tumblr_inline_n7hkr82yxz1rdvfvl.jpg'>Fire hydrants</a> (3x8)
<br>
  <a href='http://www.drsapna.com/rehab-thursdays-the-hip-hike/'>Hip hikes</a> (2x20)<h3>Every few days</h3>
<p>
<a href='http://fitnessbodygain.com/how-to-perform-deadlift-correct-form/'>Deadlifts</a>. If you don&#039;t have access to a gym then it&#039;s worth getting a pair of dumbbells. A pair of 4kgs is fine for useful (distance) running strength - you will see diminishing returns as you increase weight. Deadlifts will make you sore, so be careful about timing them around running.]]></content:encoded>
  </item>
      <item>
    <title>Fix your game, Valve.</title>
    <link>https://blog.asgaard.co.uk/2014/09/21/fix-your-game-valve</link>
    <pubDate>Sun, 21 Sep 14 14:53:44 +0000</pubDate>
    <guid>https://blog.asgaard.co.uk/2014/09/21/fix-your-game-valve</guid>
    <description><![CDATA[
<p>
<strong>Update</strong> - possible workaround: Restart Steam before launching CS:GO. It doesn&#039;t always work, but the failure rate is more towards 50% than 100%.<hr/>
<p>
At the moment Counter-Strike: Global Offensive is sporadically (un)playable due to server issues. I&#039;d be willing to cut Valve some more slack on this were it not for the fact it&#039;s been broken for about the last 7 or 8 weeks.
<p>
What happens is you try to join a game, then spend about two minutes staring at your screen while nothing happens. If you open the console up, it&#039;s full of messages like
<p>
<pre>CSysSessionClient: Unable to get session information from host
CSysSessionClient: Cannot join lobby, response 5!</pre>
<p>
Then eventually we get the very helpful error message:
<p>
<pre>Failed to create session. Please check your connection and try again</pre>
<p>
Something&#039;s gone wrong, better blame the user!
<p>
If you are persistent enough you usually can get into a game, but there&#039;s a good chance it&#039;ll be empty.
<p>
Like I said, it&#039;s been like this for weeks. It&[...]]]></description>
    <content:encoded><![CDATA[
<p>
<strong>Update</strong> - possible workaround: Restart Steam before launching CS:GO. It doesn&#039;t always work, but the failure rate is more towards 50% than 100%.<hr/>
<p>
At the moment Counter-Strike: Global Offensive is sporadically (un)playable due to server issues. I&#039;d be willing to cut Valve some more slack on this were it not for the fact it&#039;s been broken for about the last 7 or 8 weeks.
<p>
What happens is you try to join a game, then spend about two minutes staring at your screen while nothing happens. If you open the console up, it&#039;s full of messages like
<p>
<pre>CSysSessionClient: Unable to get session information from host
CSysSessionClient: Cannot join lobby, response 5!</pre>
<p>
Then eventually we get the very helpful error message:
<p>
<pre>Failed to create session. Please check your connection and try again</pre>
<p>
Something&#039;s gone wrong, better blame the user!
<p>
If you are persistent enough you usually can get into a game, but there&#039;s a good chance it&#039;ll be empty.
<p>
Like I said, it&#039;s been like this for weeks. It&#039;s not just me, the CS:GO forums and subreddit are littered with complaints, although it may be &#039;localised&#039; to Europe.
<p>
Valve (or at least, a Valve employee) <em>eventually</em>, after 5 or 6 weeks of continued problems, responded to this on an <a href='https://list.valvesoftware.com/pipermail/csgo_servers/2014-September/009610.html' target='_blank' rel='nofollow'>obscure mailing list</a>:<blockquote>
Hi all,<br/><br/>

The issue with sessions has been becoming a real problem with the growth of CS:GO user base. We are aware of the issue and are working on changing the core systems to scale better with the size of CS:GO community. Unfortunately this is not a quick rewrite, and there isn&#039;t an easy workaround that we can offer at the moment. We&#039;ll be sure to include a release note when the update resolving session issues is ready, but don&#039;t have an exact date to announce yet.<br/><br/>

Thanks,<br/>
-Vitaliy<br/>

</blockquote>
<p>
but this is a bit ridiculous. Scaling issues don&#039;t appear out of nowhere unless you have sudden, unexpected growth (which CS:GO hasn&#039;t had); they are foreseeable far in advance of becoming fatal problems. The idea that the matchmaking system is so complicated that it requires weeks or months of work to fix also seems questionable, and even if that were true, you <em>could</em> mitigate it in the short term by throwing hardware at the problem.
<p>
If you&#039;re thinking of buying CS:GO, don&#039;t. 
<p>
]]></content:encoded>
  </item>
      <item>
    <title>Recruiter spam</title>
    <link>https://blog.asgaard.co.uk/2014/08/28/recruiter-spam</link>
    <pubDate>Thu, 28 Aug 14 15:19:32 +0000</pubDate>
    <guid>https://blog.asgaard.co.uk/2014/08/28/recruiter-spam</guid>
    <description><![CDATA[
<p>
&quot;Middleweight Developer Opportunity upto £24K Salary&quot;
<p>
then later in the email:
<p>
&quot;We&#039;re looking for a developer who is highly motivated, dedicated and eager to learn&quot;
<p>
Ok, that&#039;s not really how economics works.[...]]]></description>
    <content:encoded><![CDATA[
<p>
&quot;Middleweight Developer Opportunity upto £24K Salary&quot;
<p>
then later in the email:
<p>
&quot;We&#039;re looking for a developer who is highly motivated, dedicated and eager to learn&quot;
<p>
Ok, that&#039;s not really how economics works.]]></content:encoded>
  </item>
      <item>
    <title>Web Workers</title>
    <link>https://blog.asgaard.co.uk/2014/08/02/web-workers</link>
    <pubDate>Sat, 02 Aug 14 21:32:56 +0000</pubDate>
    <guid>https://blog.asgaard.co.uk/2014/08/02/web-workers</guid>
    <description><![CDATA[
<p>
Web Workers. The web&#039;s answer to threading. I&#039;m disappointed to learn, after much investigation, that for my own practical purposes, web workers are useless. This appears to be a contrarian opinion - most articles discussing web workers praise their existence while completely skipping over the practicality of actually benefiting from them.
<p>
They have two main problems:
<p>
1. A worker cannot access the DOM. This is actually less of a problem than it initially appears, because having your GUI exist only in one thread is a reasonable limitation. But sometimes building DOM really is a bottleneck and it would be nice if you could at least build isolated fragments in workers.
<p>
2. Workers cannot share data; they are entirely separate beings. In extremely contrived circumstances, you can transfer ownership of a piece of data from one thread to another. This is not a generally useful piece of functionality, because you need to be using objects that implement the Transferable interface. You&#039;re probably w[...]]]></description>
    <content:encoded><![CDATA[
<p>
Web Workers. The web&#039;s answer to threading. I&#039;m disappointed to learn, after much investigation, that for my own practical purposes, web workers are useless. This appears to be a contrarian opinion - most articles discussing web workers praise their existence while completely skipping over the practicality of actually benefiting from them.
<p>
They have two main problems:
<p>
1. A worker cannot access the DOM. This is actually less of a problem than it initially appears, because having your GUI exist only in one thread is a reasonable limitation. But sometimes building DOM really is a bottleneck and it would be nice if you could at least build isolated fragments in workers.
<p>
2. Workers cannot share data; they are entirely separate beings. In extremely contrived circumstances, you can transfer ownership of a piece of data from one thread to another. This is not generally useful, because it only works on objects that implement the Transferable interface. You&#039;re probably wondering &quot;what&#039;s the Transferable interface?&quot; - exactly - it&#039;s not widely used.
<p>
This leaves you in the position of passing copies of your data to web workers. Copies are made via the structured clone algorithm, which, much like a JSON round-trip, strips objects of their classes, so you probably also need your own serialization layer to re-assemble the data into real objects with the correct prototypes. At this point, not only have you taken on a lot of extra work, you&#039;ve most likely also killed the performance of your application by bogging it down in communication overhead - an unfortunate result when the entire point was to speed it up.
<p>
In short, web workers have a very narrow use case: they are useful if and only if your problem consists of a lot of computation and very little data, otherwise the communication overhead will be greater than the gains made by using more threads. I&#039;m not denying this use case exists, but I am denying that the majority of JS apps which could benefit from threads fall into this category.<hr/>
<p>
If anyone disagrees with this I&#039;d be interested to hear why. Most of the articles about web workers seem to vaguely allude to the (in my opinion extremely poor) solution of using transferable objects, but completely omit the practicality of using them, which almost makes me feel like I&#039;m missing something...]]></content:encoded>
  </item>
      <item>
    <title>Steam vs Amazon Prices</title>
    <link>https://blog.asgaard.co.uk/2014/06/20/steam-vs-amazon-prices</link>
    <pubDate>Fri, 20 Jun 14 08:29:03 +0000</pubDate>
    <guid>https://blog.asgaard.co.uk/2014/06/20/steam-vs-amazon-prices</guid>
    <description><![CDATA[
<p>
Everyone on Reddit seems to take it as unspoken truth that Steam is cheap. Maybe this is true in America.
<p>
Here in the UK, South Park: The Stick Of Truth is currently on sale (33% off) at £26. On Amazon UK, it is full price at £22. I&#039;m slightly underwhelmed.[...]]]></description>
    <content:encoded><![CDATA[
<p>
Everyone on Reddit seems to take it as unspoken truth that Steam is cheap. Maybe this is true in America.
<p>
Here in the UK, South Park: The Stick Of Truth is currently on sale (33% off) at £26. On Amazon UK, it is full price at £22. I&#039;m slightly underwhelmed.]]></content:encoded>
  </item>
  </channel>
</rss>