Notes from Daily Encounters with Technology
 
# Sunday, July 26, 2015

Having a site automatically deployed from a Git branch can be convenient, but I don't feel all that comfortable deploying a commit before successfully running all the tests. Of course, this can easily be achieved by first committing to a different branch, running all the tests, and only then merging the commit to the branch the deployment is done from. Being lazy, I don't want to do the merging manually - that's what I have my continuous integration server for. If it's already running the tests, it should do the merging as well.

TeamCity has built-in support for the gated commit build pattern in the form of pre-tested commits. Unfortunately, to make them work, you need to use a supported IDE or a command line tool. That's why I decided in favor of an alternative approach: the automatic merge feature. It took some experimentation to configure it correctly, but I like how it turned out in the end. I'm writing down the steps I had to take, in case I ever want to use it in another project.

For the automatic merge feature to work at all, TeamCity must monitor both the source and the destination branch. I decided to deploy from the deploy branch and keep committing my work to the master branch. The relevant settings are part of the VCS Root configuration (you need to Show advanced features for the Branch specification field to show up):

VCS Root Branch Specification
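
For the two branches mentioned above, the specification could look something like this (treat it as an illustration of the syntax rather than my exact settings):

+:refs/heads/(master)
+:refs/heads/(deploy)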

Failing to have your destination branch watched makes TeamCity unaware of it and results in the following error:

Automatic merge failed: Cannot find destination branch to merge into: no VCS branch maps to the 'deploy' logical branch name according to the VCS root branch specification.

Since TeamCity will now start committing to your source control, you might also want to change the username it uses for that. For Git, it uses the following default value: username <username@hostname>. This can be configured with another advanced VCS root feature: Username for tags/merge.
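
For example, a value along these lines will do (the name and address here are made up for illustration):

TeamCity <teamcity@example.com>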

With all that configured, it's time to add the Automatic merge build feature:

Add build feature: Automatic merge

Most of the configuration is pretty straightforward:

  • Watch builds in branches must contain source branch(es) filter: +:master in my case
  • Merge into branch must contain the destination branch: deploy in my case
  • You only want to Perform merge if: build is successful
  • Merge policy is up to you; I decided to Always create merge commit

I had problems with the default Merge commit message: parameter references failed to resolve properly (e.g. %teamcity.build.branch% always resolved to <default> instead of the source branch, as one would expect, and %build.number% always resolved to 1). I can live with a fixed message, though. Git's merge tracking works well enough for my needs.

Sunday, July 26, 2015 11:41:24 AM (Central European Daylight Time, UTC+02:00)
Software | Git | TeamCity
# Sunday, July 19, 2015

One of the most daunting parts of replacing my current blogging platform, DasBlog, with a site created with DocPad is the migration of existing content. Being a software developer, I wanted to automate as much of the process as possible. Even if the total time required wouldn't be all that much shorter, I'd rather spend it writing scripts and learning new tools and technologies than doing mundane tasks.

The most critical part of the conversion was the switch from HTML content in DasBlog to Markdown content in my DocPad site. Although DocPad would have allowed me to use a different source format for old blog posts, this would make it much more difficult to use consistent styles across all posts. I soon realized there are not all that many options available for converting HTML to Markdown. In the end I chose to-markdown for JavaScript by Dom Christie. Surprisingly enough, installing it turned out to be the most challenging part.
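
The install command itself is the standard one, of course:

npm install to-markdown --save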

The next obstacle was getting the content out of DasBlog. At first I wanted to use its internal dayentry.xml files directly, but they don't seem to contain the permalinks for blog posts. I tried looking at the DasBlog sources, but quickly decided to search for an alternative solution. I stumbled across an interesting tool for exporting DasBlog content to BlogML. Unfortunately the original link to its sources didn't work anymore, so I had to settle for a binary-only repository I happened to find.

Selecting to-markdown as my conversion library automatically meant I was going to write my conversion script in Node.js. Although I probably could have written a command line script directly in Node.js, I decided to take advantage of Grunt instead, since I had more previous experience with it. Its direct support for CoffeeScript was just an extra bonus. I even already knew how to debug the script in WebStorm - my favorite IDE for the JavaScript stack.

Writing a custom Grunt multi task is simple enough:

module.exports = (grunt) ->
  grunt.initConfig(
    {
      blogml2docpad:
        convert:
          src: './DasBlog.xml'
          dest: './DocPad/'
    }
  )

  grunt.registerMultiTask('blogml2docpad', \
  'Convert BlogML file to Markdown files for DocPad', ->
    grunt.log.writeln('src : ' + this.data.src);
    grunt.log.writeln('dest : ' + this.data.dest);
  )

  grunt.registerTask('default', ['blogml2docpad'])

I was ready to read the contents of the DasBlog.xml file - my BlogML export of DasBlog content. I chose xml2js as the XML parser and dumped all blog posts to the console, unmodified:

exportPost = (post) ->
  grunt.log.writeln(post.content[0]._);

grunt.registerMultiTask('blogml2docpad', \
'Convert BlogML file to Markdown files for DocPad', ->
  fs = require 'fs'
  xml2js = require 'xml2js'

  parser = new xml2js.Parser

  data = fs.readFileSync this.data.src
  parser.parseString data, (err, result) =>
    exportPost post for post in result.blog.posts[0].post
)

You might have noticed how I accessed the /blog/posts element in the XML document: result.blog.posts[0].post. I found the syntax not quite intuitive and had to inspect the result object in the debugger for some time before getting used to it.
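
To illustrate, this is roughly the shape of the parsed document with default parser options - element children become arrays, attributes end up under $, and text content under _ (a simplified sketch with made-up values, not the complete BlogML structure):

result =
  blog:
    posts: [
      post: [
        $: { 'date-created': '2015-07-19T12:42:01' }                  # XML attributes end up under $
        title: [ { $: { type: 'text' }, _: 'Post title' } ]           # element text ends up under _
        content: [ { $: { type: 'text' }, _: '<p>Post body</p>' } ]
        categories: [ { category: [ { $: { ref: 'CoffeeScript' } } ] } ]
      ]
    ]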

Converting that HTML to Markdown couldn't have been easier:

exportPost = (post) ->
  toMarkdown = require 'to-markdown'
  grunt.log.writeln(toMarkdown(post.content[0]._));

Once I got the basic conversion working, it was time to write the posts to files which could be used by DocPad, instead of just dumping them to the console. Since I had already settled on the filename structure for blog posts (<date>-<title>.html.md, e.g. 20150719-MigratingDasBlogContentToMarkdownBasedDocPadSite.html.md for the post you're currently reading), I had to generate such filenames based on the post metadata in BlogML:

createFilename = (post) ->
  moment = require 'moment'
  slug = require 'slug'
  titleCase = require 'title-case'

  datePrefix = moment(post.$['date-created']).format('YYYYMMDD')
  titleSlug = slug(titleCase(post.title[0]._), '')
  datePrefix + '-' + titleSlug + '.html.md'

exportPost = (post, params) ->  # params = the task's data (contains the dest path)
  fs = require 'fs'
  toMarkdown = require 'to-markdown'

  filename = createFilename post
  contents = toMarkdown post.content[0]._
  fs.writeFileSync params.dest + filename, contents

Since my site needs some metadata about the posts to display them correctly, it's not enough to have just the blog post contents in the file; a YAML header with metadata is required as well. This is how it looks for this blog post:

---
title: "Migrating DasBlog Content to Markdown Based DocPad Site"
date: 2015-07-19
description: "One of the most daunting parts of replacing my current blogging platform 
DasBlog by a site created with DocPad, is the migration of existing content. Being a 
software developer, I wanted to automate as much of the process as possible. Even if 
the total time required wouldn't be all that much shorter, I'd rather spend it writing 
scripts and learning new tools and technologies, than doing mundane tasks."
tags:
 - DasBlog
 - CoffeeScript
 - JavaScript
 - Grunt
 - Node.js
---

I installed yamljs and got to work:

createHeader = (post) ->
  moment = require 'moment'
  header =
    title: post.title[0]._
    date: moment(post.$['date-created']).format('YYYY-MM-DD')
    description: ''
  # no tags for posts without categories
  if post.categories[0].category
    header.tags = (category.$['ref'] for category in post.categories[0].category)
  header

exportPost = (post, params) ->
  os = require 'os'
  yaml = require 'yamljs'
  fs = require 'fs'
  toMarkdown = require 'to-markdown'

  filename = createFilename post
  header = createHeader post
  contents = '---' + os.EOL + yaml.stringify(header) + '---' + os.EOL + 
    toMarkdown(post.content[0]._)
  fs.writeFileSync params.dest + filename, contents

That's it! My existing content was exported from DasBlog and converted to the new format expected by my DocPad site. To be honest, I did some additional post-processing on the post contents to transform internal links to other posts, images and downloads, but including that code here would just obscure the core of the solution. In case you're curious, I just prepared the mappings of old URLs to new ones and used regular expressions to apply them to the Markdown text.
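
Roughly along these lines (a stripped-down sketch with made-up URLs, just to illustrate the idea):

# hypothetical mapping of old DasBlog URLs to new DocPad ones
linkMappings =
  '/blog/old-post-permalink.aspx': '/blog/20150719-NewPostName.html'

applyMappings = (markdown, mappings) ->
  for oldUrl, newUrl of mappings
    # escape special regex characters in the old URL before building the pattern
    escaped = oldUrl.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
    markdown = markdown.replace(new RegExp(escaped, 'g'), newUrl)
  markdown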

Sunday, July 19, 2015 12:42:01 PM (Central European Daylight Time, UTC+02:00)
Development | CoffeeScript | Grunt | Personal | Website
# Sunday, July 12, 2015

Installing an npm package can consist of much more than simply downloading a bunch of files and dropping them somewhere on your local disk. Packages need not be written only in JavaScript; they can also include native Node.js modules. Since we're talking about a platform agnostic environment, the packages don't include precompiled binary files; the sources need to be compiled when the package is installed on the local computer.

Packages with native modules depend on the node-gyp toolkit for compiling the sources. Of course the toolkit by itself doesn't include all the required dependencies for compilation; there's a list of prerequisites to make it work on each of the supported platforms. In spite of that, I still found it a bit challenging to make it work on Windows, in particular when I wasn't dealing with a freshly installed machine and different development tools were already installed.

Installing Python is the easy part of the job. Just make sure you're installing version 2.7, and don't forget to select the feature for adding python.exe to the PATH. Alternatively, you can create an environment variable named PYTHON and set its value to the full path of python.exe on your computer. If you choose to install Python using Chocolatey, this is taken care of automatically.

Add python.exe to Path
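
If you go the environment variable route instead, a single command takes care of it (the path below assumes the default Python 2.7 install location, and setx only affects newly opened command prompts):

setx PYTHON "C:\Python27\python.exe"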

The troubles start with the C++ compiler tooling. At least for 64-bit Node.js installations (is anybody still using anything else?), installing only Windows SDK 7.1 turned out to be the simplest approach for clean machines. I opted for the default set of features just to be sure, although probably not everything is really needed. Unless disk space or bandwidth is an issue, I recommend you do the same. After the SDK is installed, just run npm from the Windows SDK 7.1 Command Prompt, or make sure you run its SetEnv.cmd from your own command prompt before using npm.
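
Assuming the default installation path, something along these lines should set up an ordinary command prompt the same way the SDK's own prompt is set up:

"C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\SetEnv.cmd" /x64 /Release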

Unfortunately the SDK installation tends to fail with weird errors if newer development tools are already installed on the machine - different versions of Visual C++ 2010 components in particular. You'll most likely need to uninstall them all for the setup to succeed. And even after it does, you'll still need to have the right C++ compiler tooling configured in your command prompt. Running the Windows SDK 7.1 Command Prompt of course remains an option, but having it configured in your standard command prompt and switching between different compilers quickly becomes an issue.

As it turns out, there is another way to make node-gyp compilation work if you already have more recent C++ compiler tooling configured on your computer. By default Visual C++ 2010 compilers are used, but there is a command line switch to force a different version: msvs_version. To use Visual C++ 2013 tooling, for example, the following command does the trick:

npm install contextify --save --msvs_version=2013

The same option can be used to restore npm dependencies on a different machine:

npm install --msvs_version=2013
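
If I'm not mistaken, the value can also be persisted in the npm configuration, so that it doesn't have to be repeated with every command:

npm config set msvs_version 2013 --global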

I like this approach a lot, since there's less additional software to install and fewer possibilities to break something else in the process of making this work.

Sunday, July 12, 2015 10:05:52 PM (Central European Daylight Time, UTC+02:00)
Development | C++ | JavaScript | Software | Windows
# Saturday, July 4, 2015

Add New Debug Configuration

As soon as I started developing my first Grunt multi task, it became obvious that being able to debug it would shorten my development time a lot. Since I've recently been using WebStorm as my JavaScript IDE of choice, I also wanted to be able to debug my task directly inside it. Unsurprisingly, WebStorm has built-in support for debugging all kinds of Node.js scripts, including Gruntfiles. It's only a matter of getting used to WebStorm's debugging workflow.

Although there's support for temporary debug configurations which can be created ad hoc, this approach can't be used for debugging Grunt scripts. A proper debug configuration needs to be created for that, using the Run/Debug Configurations dialog (accessible via the Run > Edit Configurations... menu). The little "plus" icon in its top left corner opens a menu with a selection of preconfigured templates for debugging different types of Node.js scripts. Among them is also Grunt.js - the one we need to use.

If you have your Node.js environment correctly set up and the grunt-cli package installed globally, two important fields should already be prefilled for you:

  • Node interpreter should point to your Node executable (C:\Program Files\nodejs\node.exe in my case)
  • Grunt-cli package should point to where you have the package globally installed (C:\Users\damir\AppData\Roaming\npm\node_modules\grunt-cli in my case)

The only things left for you to fill in are the configuration Name (up to you) and the path to the Gruntfile you want to debug. Optionally, you can also set the Tasks to run if you want to debug anything other than the default task.

Grunt Debug Configuration

If you have written your Grunt script in JavaScript, then that's all you need to do. You can now set a couple of breakpoints and start debugging using the Run menu or the Debug tool window (the latter will open automatically once you start debugging if it was closed). If your Gruntfile is written in CoffeeScript, you're not done yet. Even though Grunt can run CoffeeScript scripts directly, WebStorm will report errors if you set a CoffeeScript Gruntfile in your debug configuration.

My next attempt was starting the Node interpreter directly and attaching to it from WebStorm. Not only was that much less convenient (having to manually start Node from the command line every time), but the mapping to the CoffeeScript source was also mismatched, which made the approach almost useless.

It turned out that WebStorm's file watchers were the right way to go. You will first need to configure a CoffeeScript file watcher in the Settings dialog. Select the Tools > File Watchers node in the tree view on the left (you can search for file watchers to find it quicker) and click the "plus" icon in the top right corner to add a new file watcher based on the CoffeeScript template. Again, almost everything is preconfigured for you; only the Program must be chosen manually. I suggest you install the coffee-script npm package globally and point to the coffee.cmd file it installs in the root of your global package installation directory (C:\Users\damir\AppData\Roaming\npm in my case).
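
If you don't have the package installed globally yet, it's a single command:

npm install -g coffee-script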

Now your CoffeeScript Gruntfile will automatically be converted to a JavaScript one whenever WebStorm detects changes. A JavaScript source map will be generated alongside it, allowing you to debug the original CoffeeScript file instead of the converted JavaScript one. You will still need to point to the JavaScript Gruntfile in the WebStorm debug configuration, though. Also, to make sure the converted file is always up to date, you should require the debug configuration to Run File Watchers before launching the script.

Run File Watchers Before Launch

Now, you will be able to debug your CoffeeScript Gruntfile just as if it was written in JavaScript.

Saturday, July 4, 2015 2:24:12 PM (Central European Daylight Time, UTC+02:00)
Development | CoffeeScript | Grunt | JavaScript | Software | WebStorm
# Friday, June 26, 2015

Roberto Vespa: SignalR Real-time Application Cookbook

I was quite surprised to receive a review request for a book that was released more than a year ago: SignalR Real-time Application Cookbook by Roberto Vespa. Although I bought this book some time ago, I somehow never got around to actually reading it. This review request was just the push I needed to finally do it.

If you have read any of my previous reviews, you might already know that I'm not really a fan of the cookbook format. This one suffers from the exact symptoms which make me dislike most such books: a significant part of it deals with unrelated technicalities needed to make the recipes work, and too much of the on-topic content gets repeated over and over again, even though the author tries to minimize that.

In spite of that, this is still a great first book on SignalR to read. It manages to cover all of the key SignalR topics, and it doesn't stop at the most common scenarios, although most of the time is spent on them, as it should be. You won't learn only about web clients and hosting the server in IIS, but also about .NET based clients and self-hosting. It doesn't stop at the high-level hub API, but explains low-level connections as well. It doesn't ignore the more advanced topics either: authorization, dependency injection, backplanes and extensibility.

The last couple of recipes were the ones I liked the most, each of them applying SignalR to a different real-world scenario. It was great to see examples going beyond the usual real-time chat. Their real value should be in giving the reader additional ideas on how to take advantage of the framework in less obvious cases. Still, don't expect the book to give you a really in-depth look at SignalR. It mostly focuses on using it, not understanding its internals. Different transport strategies are only briefly touched upon, and the author doesn't even attempt to explain the "magic" that's happening behind the curtains.

If you're looking for a book to learn SignalR from, you can't go wrong with this one. On the other hand, if you're already fluent in SignalR and just want to learn more, it probably isn't your best choice, unless you're interested in one of the above-mentioned topics.

The book is available on Amazon and sold directly by the publisher.

Friday, June 26, 2015 12:03:31 PM (Central European Daylight Time, UTC+02:00)
Development | ASP.NET | JavaScript | Personal | Reviews