
Tested: Apple Won't Make a Touch MacBook

Norm and Jeremy on the Tested.com podcast have frequently complained that Apple doesn’t make laptops with touchscreens. This past week they stated that a touchscreen MacBook was almost an inevitability at some point in the future. I think this is unlikely, and most definitely not something that will be released anytime soon.

Before I go any further, just a note: I have not owned a laptop with a touchscreen (I do use a Pixel C like a laptop frequently, though). But when has having no experience in a subject stopped anyone from voicing their opinion on the internet?

Why am I so adamant that there will be no touch MacBooks? The answer is simple: macOS. macOS/OS X is not designed to accept imprecise input from a touchscreen - the touch targets are far too compact. The window chrome on macOS has typically been smaller than its Windows counterpart.

This was mentioned in an interesting post by the Chrome design team, which ran through the process of redesigning the Chrome UI across all platforms - the chrome is significantly taller on Windows.

Most of the time I have spent using touchscreen laptops has been debugging group project code on teammates’ computers. This meant using IntelliJ - which has its fair share of menus and toolbar buttons, all designed for use with a mouse. Naturally, because of the novelty of having a touchscreen (or the mediocre quality of the trackpads), I used the touchscreen instead of the trackpad.

IntelliJ is basically unusable on a touchscreen: the menus and buttons are too small to hit, and navigating nested menus is not at all pleasant. Anyone that has used a Mac knows that most normal applications have all their actions in the menu, and common actions can be placed in the toolbar of the application. The minimum recommended size for a touch target on iOS is 44 by 44 points, whereas the recommended size of toolbar items on macOS is “at least 19x19 points” - the actual clickable area is slightly larger than this, at 36 by 24 points. Menus are a similar story - menu items are only 30 points high.

For a touchscreen Mac to be a good user experience, macOS’s entire UI would have to be redesigned. This would mean a massive amount of work for third-party developers (maybe not so much for those that use only system controls) and would probably leave behind a sad collection of apps that look out of place in the new OS.

Of course Apple has not shied away from making massive changes that require significant work from developers (switching to Intel, introducing retina displays, the Yosemite redesign, etc). However, given how small the recent changes to the Mac lineup have been, any major change seems unlikely.

This is coming from someone that uses the terminal to find files more often than Finder, and uses their Mac mostly for development - so perhaps my usage is not quite the norm. Although almost everything I do that isn’t development is done on my Pixel C.

I think Apple’s answer to people that want a touchscreen laptop is the iPad Pro. And no, they will not merge macOS and iOS.

Bluetooth is Great, Until it's Not

Did you hear that Apple removed the headphone jack from the latest iPhone? Oh, you did? Well, that’s a relief. What do you think? Have you sworn never to use an Apple device again? I thought I would care - but I don’t any more.

Let’s rewind a bit. Most of my time listening to audio on my phone since 2013 has been spent on podcasts. All the great shows. I had been using a beat-up pair of Apple EarPods, as they fit my ears better than any other in-ear headphones. However, having to wiggle the cable every few minutes gets old fast, and I was on the lookout for a replacement.

I ended up looking at the Urbanears Plattan ADV and the Marley Positive Vibration headphones. Both are fairly reasonably priced and look good. When I went to buy them, I found that thanks to a sale, the Marley Rebel BT headphones were the same price I had been expecting to pay for the other Marley headphones.

So I am now the proud owner of some bluetooth headphones, and the lack of a cable is liberating. I am no longer concerned about how my phone sits in my pocket, or if I leave it on my desk when I jump up to get something, or how the cable will tangle with the strap on my bag. I am a satisfied customer.

Of course it’s not all good. Bluetooth pairing is a scary business, and Bluetooth devices don’t like sharing. Connecting headphones that are paired to my phone to my tablet as well is a recipe for disaster: they would start auto-pairing to the tablet whenever I turned them on, and I would have to venture into the settings each time I wanted to use them with my phone. Having a cable does make this easier - if I’m using my tablet or laptop then it’s unlikely that the cable will get in the way, so limiting Bluetooth to my phone is not a big deal.

What would be ideal is a pair of headphones that could accept a few different inputs and either combine them all or select the most recent one - so you could be listening to a podcast on your phone, then play a video on your tablet, and the phone would be told to pause while the video plays. The catch is that the headphones would either have to be in constant pairing mode to connect to new devices as they come in range, or require some button-press to look for a new device.

Of course, all good things must come to a saddening end - the more technology you add to something, the more ways it can break. So when I turned my headphones back on in preparation for the skate back home, instead of making the comforting “boo-doop” to indicate they were on and paired, they went “beeeeeeeeeeeeeeeeeeeeeeeep booooooooooooooop zero zero zero zero zero zero zero zero zero zero zero zero zero zero zero zero zero zero…” (yes, they literally had a computer voice saying “zero” over and over). Plugged in, they worked fine, but any sign of Bluetooth working was gone. Back to the shop to claim that return policy!

So now I have a new pair, and right now they are working fine (I’m listening to some Mutemath as I write this - playing via my phone while I write on my tablet). But they weren’t without their own issues - for a few days they decided they would only connect to my phone if I manually told my phone to connect to them. Then, completely out of the blue (heh), they started connecting automatically again. Great for me, but damn weird.

Overall I am happy that my “daily driver” headphones are wireless - though I will still use my Sennheiser over-ears if I’m working on my laptop at home. The ability to leave my phone sitting on a table, to have it facing whichever way in my pocket, and to remove the possibility of the cable catching on things makes wearing headphones more seamless in almost all situations. However, while Bluetooth is great when it works, it has plenty of opportunities to stop working and become less convenient than just plugging in a cable. And of course it means another device that needs charging - I now have six devices that require regular charging.

Basics of Functional Programming

As someone who enjoys learning new programming languages, it was only a matter of time before I came across functional programming languages, higher-order functions, and the like. Earlier this year I found out that Java 8 supports some functional programming, and I have been writing less boilerplate code ever since - much to the horror of my teammates. So this is for you, so you can hopefully understand my spaghetti of lambdas.

Functional programming is based around the idea of passing code around just like you would any other object. If you’re into design patterns, it’s like you’re using a very loose version of the Strategy pattern or the Template pattern. You provide a set of instructions that will be inserted into an existing algorithm or operation.
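
To make that concrete, here is a tiny Ruby sketch of code being passed around as a value:

double = ->(x) { x * 2 }     # a lambda, stored in a variable like any other object
double.call(21)              # => 42
[1, 2, 3].map(&double)       # => [2, 4, 6] - handed to another method as an argument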

Most languages that support higher-order functions (functions that take code as a parameter) have three ‘bread and butter’ functions built-in: map, filter, and reduce. These simplify common list operations by abstracting away the boilerplate.

Map

Let’s say that I have a list of countries, and I want to present them to a user in a certain format. This is a fairly common situation: I have a list and I want to perform an operation on each of its elements to produce a new list. You could say that there is a mapping from each element in the first list to an element in the second. In first year you are told to do something like this:

countries = # Some list of country objects
country_names = []
for country in countries
    country_names.push(country.name)
end
# Do something with the list of country names

However a far more succinct way of doing this is to map the list:

countries = # some list of countries
country_names = countries.map { |country| country.name }

Both methods are doing the same thing, but (for someone who understands functional programming) the second is much clearer and reduces the amount of noise in the code. Of course the disadvantage is that it can hide potentially costly operations.

An important note with map is that the operation should not have side effects on the objects you are mapping over. For example, if you map the countries to get all their names, but also reset some attribute of each country, you’re asking for problems in the future. If someone later decides that they only want the names of the first ten countries, and you were relying on that side effect being performed on all of them - problems are inbound.
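
A contrived Ruby sketch of what to avoid (the visited attribute is made up for illustration):

# Bad: the block mutates each country as a side effect
names = countries.map do |country|
  country.visited = false   # hidden side effect - breaks if the map is ever narrowed
  country.name
end

# Good: the block only derives the new value
names = countries.map { |country| country.name }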

Filter

Filter treats your function like a sieve - everything it accepts is let through, and the rest is discarded. In this case your lambda takes an item and returns true if you want that item to make it through the sieve. Filter removes even more boilerplate:

let numbers = [1, 2, 5, 6, 9]
var even_numbers = [Int]()
for number in numbers {
  if number % 2 == 0 {
    even_numbers.append(number)
  }
}
// Do something with the even ones

With filter, this collapses to:

let numbers = [1, 2, 5, 6, 9]
let even_numbers = numbers.filter { number in number % 2 == 0 }
// Do something with the even ones

You can of course chain filter statements together, or include a few conditions - basically like an SQL WHERE clause. Filter is especially useful when you have a list of objects, and you want to get rid of the ones that are null.
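
For example, in Ruby (population is a made-up attribute for illustration):

# Drop the nils, then keep only what passes the condition - like chained WHERE clauses
big_countries = countries
  .reject(&:nil?)
  .select { |c| c.population > 1_000_000 }

countries.compact  # Ruby even has a dedicated helper for dropping nils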

Reduce

When you have a list of items and want to distill it down to one object that represents some aspect of the whole list, reduce is what you’re looking for. The lambda takes two arguments - the reduced form so far, and the item that you want to reduce ‘into’ that form. Reduce also takes an initial value, which is what the reduced form starts off as. A great example is summing a list of numbers - the initial reduced form is 0, and each step adds the current number to it.

numbers = [1, 2, 3, 6, 7]
sum = numbers.reduce(0) { |so_far, number| so_far + number }

Reduce is hard to explain - mainly because I don’t end up using it very often. Most languages include helpers for the common reduce operations: join, sum, and product are great examples. Each takes a list and gives you back a single value that is the combination of every item in the list.
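
In Ruby, for instance, the same sum can be written with the general form or a shorthand:

numbers = [1, 2, 3, 6, 7]
numbers.reduce(0) { |sum, n| sum + n }  # => 19 - the general form
numbers.reduce(:+)                      # => 19 - shorthand for the same reduction
["a", "b", "c"].join(", ")              # => "a, b, c" - join is a reduce in disguise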

If you think about it, both map and filter can be implemented using reduce - making reduce the only list operation you really need. So really, map and filter are just helpers for the common cases of reduce.
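
As a quick sketch in Ruby:

# map: a reduce where the accumulator collects transformed items
def my_map(list, &block)
  list.reduce([]) { |acc, item| acc + [block.call(item)] }
end

# filter: a reduce where the accumulator only collects items the block accepts
def my_filter(list, &block)
  list.reduce([]) { |acc, item| block.call(item) ? acc + [item] : acc }
end

my_map([1, 2, 3]) { |n| n * 2 }         # => [2, 4, 6]
my_filter([1, 2, 3, 4]) { |n| n.even? } # => [2, 4]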

Let’s make a lambda!

So with all this knowledge, how do you go about using it? Well…

In Ruby, any method that accepts a block (Ruby has lots of names for its anonymous functions) can be followed by a code block, either with do ... end or { ... }:
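
For example, these two are equivalent:

# Braces are the convention for one-liners...
country_names = countries.map { |country| country.name }

# ...and do/end for multi-line blocks
country_names = countries.map do |country|
  country.name
end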

In Swift, closures are a type (defined by their arguments and the type they return) and, like Ruby’s blocks, can either go inside the argument list or after the function call if the closure is the last argument.

Java doesn’t really support lambdas. They are instead an anonymous implementation of an interface that has just one method. So a lambda that turns a country into a string of the country name is actually an implementation of the generic interface Function<T, R> (i.e. its type is Function<Country, String>), and it has a method R apply(T t) that takes in a value of type T and returns a result of type R. The code in the lambda provides the implementation of this method.

All of the list operations are hidden behind the stream() method on lists, as well as the Stream.of() method that can create a stream from an array. To turn your stream back into a list, you’ll want the .collect(Collectors.toList()) method. So the country to country name example would look something like:

List<Country> countries = // Some list from somewhere
List<String> names = countries
    .stream()
    .map(country -> country.getName())
    .collect(Collectors.toList());

(Of course Java manages to still make a one line function into four)

Method references

If you write functional code for long enough, you’ll notice some boilerplate - like creating a lambda that just calls one method on an object. Often you can refer to that method directly, rather than writing out the whole lambda:

In Java:

(item) -> item.method()
// Can be replaced with
Item::method

And in Ruby:

items.map { |item| item.method }
# Can be replaced with
items.map(&:method)

If you want to learn more functional programming, Haskell, Clojure (or Common Lisp), and Elixir are all interesting.

Making Slackbots

This semester, for my group project, I made a slackbot to select people for code reviews and generally be a nuisance in our Slack group. I split it out into a gem which makes it easy to integrate with the Slack real time messaging API. All it really does is provide a wrapper around the websocket connection and call methods according to the type of the update received (typically the only update you care about is ‘message’, so there will only be one method). It may just be a wrapper, but it is my wrapper, and I’m very pleased with how easy writing and maintaining the SENG group bot is.
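
To give an idea of the shape of it (the class and method names here are illustrative, not the gem’s exact API):

require 'realtime-slackbot'

class ReviewBot
  # The wrapper calls a method named after each update type -
  # 'message' is usually the only one you need to define
  def message(update, slack)
    return unless update.text.include?('review')
    # pick_reviewer is a made-up helper for this sketch
    slack.send_message(update.channel, "#{pick_reviewer} has been volunteered!")
  end
end

# The wrapper owns the websocket connection and dispatches updates to the bot
SlackBot::Connection.new(ENV['SLACK_TOKEN'], ReviewBot.new).start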

Fast forward a month or two: my flatmate Logan and I entered the MYOB ‘try and think of a good idea we can steal later’ competition. Each team has five days to build something that could improve, work with, or build on something that MYOB already offers. We quickly settled on the idea of a slackbot that would help you timesheet by regularly reminding you to tell it what you’re doing - so that at the end of the day you have a reliable record of what you spent your time on, which you can use to make an accurate timesheet.

Initially we were set on writing whatever we made in Swift (because of just how cool it is), but it is a massive pain to get the correct nightly build in order to use third-party libraries, and installing it on Arch Linux is not trivial. So we decided to take the more pragmatic approach and use Ruby, along with my realtime-slackbot gem (after making some changes to make it more usable by other people).

It’s important to understand that there is a significant difference between a Slack app and a Slack integration - apps are distributed through Slack’s marketplace and can typically be added in one or two clicks via an ‘Add to Slack’ button. Custom integrations are specific to a single team and are added by creating a new integration on the team config page, then using the token from there when starting the bot.

My previous bots had all been custom integrations - specifically tailored to my team and hosted on my Raspberry Pi at home. What Logan and I were setting out to do was make a proper Slack app that could be installed and used by anyone, in any team. This meant implementing the OAuth ‘flow’ to get a token that could be used in a certain team. The sequence of events goes something like:

  1. The user clicks the Add to Slack button on your website
  2. They select one of their teams to add the app to
  3. Slack sends a one-time code to your server
  4. You use this code to get a permanent auth token for the team
  5. You send this token to the RTM.start method of the API to get a websocket URL
  6. A new bot instance connects to this URL and starts interacting with the members of the team.

My gem was built to handle only the last two steps of this sequence, so we had to implement a web server that could handle the callback from Slack, and host somewhere for the Add to Slack button to live. We ended up using Sinatra for this, as it is very well supported and can be used in a single file - which is great when you just want to serve two mostly-static pages.
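
A minimal sketch of the Sinatra side (the route and the notify_bot_manager helper are illustrative; oauth.access was Slack’s token-exchange method at the time):

require 'sinatra'
require 'net/http'
require 'json'

get '/' do
  erb :index  # the page holding the Add to Slack button (step 1)
end

get '/oauth' do
  # Slack redirects here with a one-time code (step 3)
  response = Net::HTTP.post_form(URI('https://slack.com/api/oauth.access'),
    'client_id'     => ENV['SLACK_CLIENT_ID'],
    'client_secret' => ENV['SLACK_CLIENT_SECRET'],
    'code'          => params['code'])

  # Exchange it for a permanent token (step 4), then hand it off
  token = JSON.parse(response.body)['bot']['bot_access_token']
  notify_bot_manager(token)  # made-up helper - in our case, a Redis publish (below)
  'Bot added!'
end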

Once we could handle the web side of things, we had to actually create new bots when a new user added the app to their team. This is where the real ‘fun’ begins. We aimed to have the web server doing its own thing (managed by Rack) and have a separate process that would manage the bots and create new ones on demand from the web server.

There are many different ways you could communicate between these two processes. You could have a queue, stored in a file or a database, that the bot manager polls every few minutes - but a file is a bit janky, and a database is overkill. You could implement some UDP or TCP socket connection, but that is probably a lot of work and prone to encoding/decoding errors if you don’t do it well. Thankfully, Logan fairly quickly found that Redis can act as a message-passing system - any number of processes can subscribe to a channel, and any message on that channel will be sent to all subscribers. Perfect.

This quickly made Redis one of my favorite new toys - it was so easy to persist (or at least kind of persist) data, as well as coordinate multiple processes. Our web server would simply send a message to the bot manager with a new token; the bot manager would save this token in Redis for later and start a new bot. The bot would then act just like a custom integration, as all it needs is the token and it will work out the rest.
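
Roughly, with the redis gem (start_bot is a stand-in for the code that spins up a bot):

require 'redis'

# Web server side: publish the freshly acquired token
Redis.new.publish('new_bots', token)

# Bot manager side: block on the channel, saving and starting a bot per token
store = Redis.new   # a subscribed connection can't issue other commands
Redis.new.subscribe('new_bots') do |on|
  on.message do |_channel, token|
    store.sadd('tokens', token)       # kind-of persistence, for restarts
    Thread.new { start_bot(token) }
  end
end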

So, a quick recap: the Sinatra server responds to the authentication endpoints for Slack, and the bot server subscribes to a Redis channel which lets it know when to connect a new bot. Each new bot is run in a new thread by the bot server.

I think this is a fairly decent effort for a five-day project, especially given that the actual bot that would do the timesheet reminders hadn’t really been started. But nothing built this hastily is without bugs, unhandled edge cases, or gaps in the robustness that you would hope for from a web service.


Elixir is a programming language that runs on the BEAM VM (the home of Erlang). Elixir is to BEAM what Kotlin or Scala is to the JVM - an alternative language that runs in the same environment and is interoperable with the VM’s main language. If you look a bit further into Elixir, it is actually mostly just a pile of macros that somehow create a usable language. Like Erlang, Elixir is a functional language with no mutable data - every value is constant. The only way to change the state of the application is to run a separate process and use message passing to manipulate that state.

The ability to easily run many processes in parallel is what makes Elixir/Erlang interesting. Each process is independent of all the others, so if something breaks in one process, nothing else is affected. By splitting an application into different processes (which is necessary anyway, because everything is immutable) you create a tree structure of processes. Each leaf can crash and be restarted by its parent, or the parent can choose to send the crash further up the chain by crashing itself. Somewhere up the tree there is a supervisor that restarts the crashed processes, keeping the application alive.

Going back to my SENG slackbot: I wanted it to remind everyone, every day at a certain time, of the merge requests they still had to review. Initially I reworked my Ruby bot to post something to a given channel each day, but it turned out to be a bit buggy and would cause the bot to crash - mainly because of my lazy programming. For something that I didn’t really want to have to worry about, it was a pain.

It is probably quite obvious where this is going. I decided to rewrite the bot in Elixir, using an existing Slack module. The Quantum library also simplified posting at a certain time of day by adding a cron-like job scheduler that runs in its own process in the background. The main advantage of using Elixir here is that, with a simple supervisor starting each process in the application, any part that crashes will be automatically restarted. At one point there was a bug where any message received by the bot that didn’t have a user ID (e.g. a deleted or edited message) would crash it. But this crash was inconsequential, as the supervisor would just create a new process running the bot and reconnect. I left this version running for about a week before getting round to fixing it, as it wasn’t really a huge problem - unlike my Ruby bot, which would be very unhappy about any errors.

Another bonus of Erlang and Elixir being so oriented around processes is that the processes don’t have to be running on the same computer. Almost by magic, an application can be split across machines without having to rewrite a whole load of code - although this comes at the cost of writing code in the process-oriented style in the first place.

So I have a new favorite toy for writing server-side services. What really makes me enthusiastic about Elixir is that every part of the Ruby slackbot system that Logan and I made could be implemented in a single Elixir application. The web application would no doubt use Phoenix, and pass off requests to create new bots to a bot manager process, which would create a new process for each bot. If we somehow managed to get an influx of users, the bots could be split off onto a different server entirely. Redis would not be needed for communicating between the processes, and a stateful Elixir process could store the key/value pairs, easily persisted to a file using the built-in Erlang serialization (which works really well, because everything is just a combination of lists, tuples, and maps).

The most important thing that I’ve learnt from this is that while you can do almost anything in your language of choice (see: Java developers), the overhead of twisting it to fit the problem might outweigh the cost of learning a new language that is better suited. Either that or I’m too easily excited by new programming languages and a mediocre Ruby developer.

Why I Dislike ATDD

This was written as the final section to a university lab report on testing, ATDD, and mocking.

Both Cucumber and Concordion aim to make it easier to write more understandable tests at a higher level - instead of writing unit tests that check very specific, granular aspects of a class, the acceptance tests ensure that the feature behaves as expected for the end user.

At my internship over the summer, I worked on an open source project management system called Redmine, and some of its plugins. The Redmine Backlogs plugin adds agile functionality to Redmine, and has a massive suite of Cucumber tests that I had to maintain. After seeing the ‘bad side’ of computer-evaluated acceptance tests and ATDD, I am very sceptical of the benefits of Cucumber - and have major doubts about Concordion.

The Backlogs tests consisted of about 20 feature files, each ranging from 1-2 scenarios up to about 6, which could mean about 200 lines of steps per file. The actual step definitions were split into 3 files (given, when, and then steps - it was a Ruby project, so it isn’t as strict as the Java implementation). These were about 1500 lines each.

Imagine the following scenario: you’re tasked with making the tests pass after some feature was added, or after a change in the environment caused them to fail. Running the tests reveals which scenario is failing, and you have the line in a feature file that is causing the failure. Because the actual definition of the step is identified by a regular expression, you can’t find it by simply searching for the line from the feature. Eventually you find it somehow - probably by doing a regex search for something similar to the step text.
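
To give a flavour of the problem (the step text here is invented, but the pattern is typical):

# The failing line in the feature file:
#   Given the story "Fix login" is in the sprint backlog
#
# The definition, buried somewhere in ~1500 lines of step files:
Given /^the (?:story|task) "([^"]*)" is in the (sprint|product) backlog$/ do |name, backlog|
  # ...setup code...
end

# Searching the step files for the literal text from the feature finds
# nothing - thanks to the alternations, that exact string never appears.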

Now that you’ve found the step definition, you can debug that step - or any of the steps above or below it in the scenario (which you have to find by repeating the same ordeal outlined above). You fix the scenario and any others that were affected by the change. Then you decide that it’s good practice to write a new test for the feature that was just added.

Here you have the reverse problem from debugging - you don’t know what steps have already been defined that you could use in the new test. Your IDE or editor likely doesn’t have any kind of autocomplete to help you fill out the steps in the scenario. Instead you add an expression to the step definition files that will be used in just your test - adding to the mass of bespoke step definitions already written.

This is obviously the worst case of Cucumber or any ATDD framework. On the flip side, I created my own plugin for Redmine while I was working there. When it came time to test it, we decided that Cucumber would be easiest - the whole team understood it, and it was already set up for one plugin, so the amount of work needed to get it working on another was minimal.

Working on another project from scratch, Cucumber was very easy to use - I knew off the top of my head every valid step definition and the options that I could give it. When creating my own definitions, I could write them in such a way that they could be reused and extended later to test different situations. Obviously this is the difference between knowing a codebase and being completely new to it - as well as between the worst type of codebase, an unmaintained open source project, and the easiest to understand, a small project by one developer, who is you.

Even knowing that I was working in the worst case, I am sceptical of the benefits of computer-evaluated acceptance tests. Talking to Sam - a coworker over the summer, and all-round testing guru - he says that the idea of Cucumber is flawed to begin with: it assumes that the client or PO will provide acceptance criteria detailed enough to test the feature sufficiently and specific enough to be turned into valid Cucumber instructions. If I was working with a PO that did give acceptance criteria of this quality, I would jump to Cucumber almost immediately.

Concordion, on the other hand, completely stumps me. I understand that having nicely formatted results to show off to stakeholders could prove useful; however, the overhead required to test using Concordion seems to be through the roof for little or no gain. In a nutshell, what Concordion appears to do is take all the assertions out of a normal JUnit test and put them inline in HTML elements. Once again, this disconnect between the actual code and the expected results would make it harder to maintain and debug tests. In my mind Cucumber is better, because the content of the feature files is just the description and expected result, whereas the Concordion files mix the description and tests with the layout of the result.

It seems like the end result of Concordion could be achieved by parsing JUnit tests with a known format of JavaDoc and assertion messages. These could be parsed as the tests run to generate an HTML file - much like JavaDoc - with the test results, which could then be styled appropriately. In fact, this could probably be done with annotations and reflection, without the need to parse the test code manually.

So far my thoughts on ATDD are that developers should spend time doing what they are best at, with the tools that they work best with - nine times out of ten this is writing code in their preferred IDE, not writing English or HTML-JUnit hybrids that will be run as tests. Perhaps my view of ATDD is skewed because I first used Cucumber in the worst possible way. If I do end up using ATDD as part of my group project, I hope it is well managed and used appropriately - maybe I will come around to this way of testing.

Why I use Nginx

There are two very important reasons why I use Nginx to run my website:

  1. It was the first thing I used
  2. It has smaller config files than Apache

Even though I have been using it for quite some time, I didn’t really understand it - until I set up a second domain for static hosting to serve a Jenkins theme, which made me realise it’s not too bad.

The CSS would only be applied if the HTTP headers were correct (i.e. text/css rather than just text/plain), and files served through GitLab’s ‘raw’ mode have a text/plain header.

So this is my nginx config file, in sections.

http {
  include /etc/nginx/mime.types;
  passenger_root # Path to the passenger gem;
  passenger_ruby # Path to the ruby shim, from rbenv;

All of my config is in the http section. I’d guess that I could have other sections for different protocols, but this is just a basic web server, so all I need is HTTP.

The include mime.types line makes nginx serve static files with the correct Content-Type header for each file extension - which is why serving the theme from here works for my Jenkins server while GitLab’s raw mode doesn’t.

server {
  location / {
    root /var/www/blog;
  }
}

This section defines a default server - any request that doesn’t match another server block will be sent here; for example, foobar.javanut.net will just go to the main blog. I could add more entries here if I wanted a subsection to go somewhere else - say I wanted to serve some other content at javanut.net/my_stuff, I could just make a new location block and set its root to a different directory on my server.

server {
  listen 80;
  server_name static.javanut.net;
  root /var/www/static;
}

This is basically the same as the previous section - just another static file server that points to a different folder. The main point here is that server_name has been set, so this server only responds on static.javanut.net. As this example shows, the location {} block in the previous one is probably unnecessary.

server {
  listen 80;
  server_name my_rails_app.javanut.net;
  root /var/www/my_rails_app/current/public;
  passenger_enabled on;
}
} # this final brace closes the http block from the top

Again this is very similar, but this server is for a Rails app using Passenger. Passenger needs to be compiled into nginx when it is installed - there is no plugin system for nginx.

Enumerating The Ways I Love Swift Enumerations

As Casey illustrates, enums in Swift are quite awesome, and really powerful once you get used to them.

When I was working on WORM I kept wanting to make things enums, and every time I tried, I wanted to attach values to each of the options. For example, I created an @Stored annotation, which you could either tell to work out the column type for you, or give a custom type string to use when it created the table. In Swift I could have done:

enum ColumnType {
  case Infer
  case Custom(type: String)
}

What is interesting is that I thought of the Swift solution before coming up with a Java version - my ‘native’ language.

Testing GitLab CI

During my internship this summer I found myself pining for a continuous integration server. The project I was working on had a massive set of Cucumber tests - the only problem was that they took 40 minutes to run completely, which is a bit too much of a pain to run regularly on a local machine. Last semester, for my software engineering group project, we were given Jenkins servers to run our tests on - this enforced the habit of keeping the tests up to date and fixing anything that broke them.

I started looking around out of curiosity to see what else there was apart from Jenkins, which was not the friendliest thing to set up at the start of the project. After a little searching I came across GitLab CI, which integrates right into GitLab (obviously) and is written in Go, which makes it quite cool right off the bat.

GitLab CI can be the simplest build server you could imagine - it can be set to just run a shell script when a commit is pushed; if the exit status is zero the build succeeded, and if it’s non-zero it failed. This basically means that you don’t have to learn a new configuration syntax to do anything (you can, but it’s definitely not needed). If you can run your tests from the command line, you’re good to go.

Once it has been set up, every commit to the repo will trigger a build on your server, and the result will be displayed in the ‘Builds’ tab of GitLab and when you view the commit. This works with either GitLab.com or a self-hosted instance of GitLab.

Full installation instructions for the CI runner are on GitLab’s website, but it’s as simple as installing a package and running the setup (these instructions are for Ubuntu; other distros are on GitLab’s website):

# Add the source to apt-get:
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-ci-multi-runner/script.deb.sh | sudo bash
# Install the runner package
sudo apt-get install gitlab-ci-multi-runner
# Run the setup
gitlab-ci-multi-runner register

The instructions say to run the last command with sudo, but when I did this my config file ended up in /etc/gitlab-runner/config.toml rather than the expected ~/.gitlab-runner/config.toml.

The register command points a runner at a certain GitLab URL (either GitLab.com or your own instance) and stores the token needed to pull your code. I set up mine with:

URL: https://gitlab.com/ci
Token: ~~ secret token ~~ # Accessed in the main project settings
Description: Test runner
Executor: shell

I made a quick branch on one of my projects that has a fair number of easily run unit tests. All I had to do was add a .gitlab-ci.yml file:

maven-package:
  script: "mvn package -B"

maven-package is the name of the build process, and the script key holds either a single shell command or a list of commands. Once this was pushed to GitLab, a build immediately started.

And failed instantly. Thankfully a full log gets output to the web interface, and I could see that the runner was getting confused trying to load up a Docker instance, even though I hadn’t configured that. So I tracked down the config file (which wasn’t where I expected, as I mentioned before) and deleted all the entries apart from the main [[runners]] section (getting rid of the [[runners.docker]] section probably would have been enough). Once I’d made this change, the build completed successfully.
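
For reference, the trimmed config.toml looked roughly like this:

concurrent = 1

[[runners]]
  name = "Test runner"
  url = "https://gitlab.com/ci"
  token = "~~ secret token ~~"
  executor = "shell"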

Right now I’m very impressed with the ease of setting up a GitLab CI runner and will definitely use one in the future (especially if I get a scooter computer) for the odd occasion that I write unit tests. However, if I did set up a CI server I would want to make sure the gitlab-runner user had as few permissions as possible - probably only able to read and write within its own home directory - so that the chance of it breaking my setup is reduced.

4K Video Editing on a 12" MacBook?

Of course, much of the difference comes from comparing Final Cut, built for a single platform, with the cross-platform Adobe Premiere - but I think this really illustrates the advantage of software running on hardware that it’s expecting.

Welcome to Swift.org

Swift is now open source!

Finally I can start having a more serious look at making something with Taylor and deploying it onto something other than my laptop. At work this morning I downloaded the Swift binary and fired up the REPL. Fully functioning Swift on Ubuntu. The future is now.

Perhaps more interesting than the actual Swift repository is the Swift Evolution page, which publicly shows the features and direction that both Apple and the Swift community want the language to take. It makes me very excited to see speed, portability, and API design among the goals for version 3 and beyond. This could mean more consistent APIs and a cross-platform Foundation library that wraps the native functions for each system (at the moment pre-processor commands are needed to use platform-specific libraries, which is not very Swift-y).

The first commit to the Swift project is dated July 18, 2010. It’s crazy to think that this was kept completely secret for four years before it was unveiled. Also pointed out in the comments: Swift has been named Swift since its inception.

Along with the dump of projects released this morning is the Swift Package Manager. I am probably far more excited about this than is normal for a tool that I haven’t really looked at yet. However, because of the pain that CocoaPods has caused me while trying to write unit tests that access a database, I’m happy to see a first-party solution - and will be updating my version of SQLite.swift as soon as I can.