{ Will Richardson }


Basics of Functional Programming

As someone who enjoys learning new programming languages, it was only a matter of time before I came across functional programming languages, higher-order functions, and the like. Earlier this year I found out that Java 8 now supports some functional programming, and I have been writing less boilerplate code ever since - much to the horror of my teammates. So this is for you, so you can hopefully understand my spaghetti of lambdas.

Functional programming is based around the idea of passing code around just like you would any other object. If you’re into design patterns, it’s like you’re using a very loose version of the Strategy pattern or the Template pattern. You provide a set of instructions that will be inserted into an existing algorithm or operation.

Most languages that support higher-order functions (functions that take code as a parameter) have three ‘bread and butter’ functions built-in: map, filter, and reduce. These simplify common list operations by abstracting away the boilerplate.

Map

Let’s say that I have a list of countries, and I want to present them to a user in a certain format. This is a fairly common situation: I have a list and I want to do an operation on each of its elements to produce a new list. You could say that there is a mapping from each element in the first list to an element in the second list. In first year you are told to do something like this:

countries = # Some list of country objects
country_names = []
for country in countries
    country_names.push(country.name)
end
# Do something with the list of countries

However a far more succinct way of doing this is to map the list:

countries = # some list of countries
country_names = countries.map { |country| country.name }

Both methods are doing the same thing, but (for someone who understands functional programming) the second is much clearer and reduces the amount of noise in the code. Of course the disadvantage is that it can hide potentially costly operations.

An important note with map is that the operation should not affect the object that you are mapping over. For example, if you map the countries to get all their names but also reset some attribute of each country, you’re asking for problems in the future. If someone later decides that they only want the names of the first ten countries, and you were relying on that side effect being performed on all of them - problems are inbound.
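As a sketch of what not to do (the visited attribute is made up):

country_names = countries.map do |country|
  country.visited = true # hidden side effect - this doesn't belong in a map
  country.name
end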

Filter

Filter treats your function like a sieve - everything that it accepts is let through, the rest is ignored. So in this case your lambda is taking an item and returning true if you want that item to make it through the sieve. Filter reduces even more boilerplate:

let numbers = [1, 2, 5, 6, 9]
var even_numbers = [Int]()
for number in numbers {
  if number % 2 == 0 {
    even_numbers.append(number)
  }
}
// Do something with the even ones

The same operation using filter:

let numbers = [1, 2, 5, 6, 9]
let even_numbers = numbers.filter { number in number % 2 == 0 }
// Do something with the even ones

You can of course chain filter statements together, or include a few conditions - basically like an SQL WHERE clause. Filter is especially useful when you have a list of objects, and you want to get rid of the ones that are null.
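For example, in Ruby you might chain a nil check with another condition:

items = [1, nil, 4, nil, 9]
evens = items
  .reject(&:nil?)         # drop the nils (items.compact does the same thing)
  .select { |n| n.even? } # keep only the even numbers
# evens => [4]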

Reduce

When you have a list of items and want to distill it down to one object that represents some aspect of the whole list, reduce is what you’re looking for. The lambda takes two arguments - the reduced form so far, and the item that you want to reduce ‘into’ this reduced form. Reduce also takes an initial value, which is what the reduced form starts off as. A great example is summing a list of numbers - the initial reduced form is 0, and each step adds the current number to it.

numbers = [1, 2, 3, 6, 7]
sum = numbers.reduce(0) { |so_far, number| so_far + number }

Reduce is hard to explain - mainly because I don’t end up using it very often. Most languages include helpers for the common reduce operations: join, sum, and product are great examples. Each takes a list and gives you back a single value that is the combination of every item in the list.
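For instance, in Ruby the reduce versions look like this (sum and join being the built-in shortcuts):

[1, 2, 3].reduce(0) { |total, n| total + n } # => 6, what the built-in sum does
%w[a b c].reduce { |a, b| a + ',' + b }      # => "a,b,c", roughly join(',')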

If you think about it, both map and filter can be implemented using reduce - making reduce the only list operation you really need. So really, map and filter are just helpers for the most common cases of reduce.
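To make that concrete, here is a rough Ruby sketch of both in terms of reduce:

def my_map(list, &block)
  # build up a new array, transforming each item as we go
  list.reduce([]) { |acc, item| acc + [block.call(item)] }
end

def my_filter(list, &block)
  # only append the item when the block accepts it
  list.reduce([]) { |acc, item| block.call(item) ? acc + [item] : acc }
end

my_map([1, 2, 3]) { |n| n * 2 }     # => [2, 4, 6]
my_filter([1, 2, 3]) { |n| n.odd? } # => [1, 3]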

Let’s make a lambda!

So with all this knowledge, how do you go about using it? Well…

In Ruby, any method that accepts a block (Ruby has lots of names for its anonymous functions) can be followed by a code block, written either with do ... end or { ... }:
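[1, 2, 3].map { |n| n * 2 }

[1, 2, 3].map do |n|
  n * 2
end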

In Swift, closures are a type (defined by their arguments and the type they return) and, like Ruby, they can either be inside the argument list or after the function call if the closure is the last argument.
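For example, the same thing written both ways:

let numbers = [1, 2, 5, 6, 9]
let doubled = numbers.map({ number in number * 2 }) // inside the argument list
let tripled = numbers.map { number in number * 3 }  // trailing closure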

Java doesn’t really support lambdas. They are instead an anonymous implementation of an interface that has just one method. So a lambda that turns a country into a string of the country name is actually an implementation of the generic interface Function<T, R> (i.e. its type is Function<Country, String>), which has a method R apply(T t) that takes in a value of type T and returns a result of type R. The code in the lambda provides the implementation of this method.
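So the country-name lambda is (roughly) shorthand for an anonymous class - a sketch, assuming a Country class with a getName() method:

import java.util.function.Function;

Function<Country, String> lambda = country -> country.getName();

// ...is more or less equivalent to:
Function<Country, String> anonymous = new Function<Country, String>() {
    @Override
    public String apply(Country country) {
        return country.getName();
    }
};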

All of the list operations are hidden in the stream() method on lists, as well as the Stream.of() method that can create a stream from an array. To turn your stream back into a list, you’ll want the .collect(Collectors.toList()) method. So the country to country name example would look something like:

List<Country> countries = // Some list from somewhere
List<String> names = countries
    .stream()
    .map(country -> country.getName())
    .collect(Collectors.toList());

(Of course Java manages to still make a one line function into four)

Method references

If you functionally program enough, there will be some boilerplate - like creating a lambda that just calls one method on an object. So you can often just refer to that method, rather than writing out the whole lambda declaration:

// Java
(item) -> item.method()
// Can be replaced with
Item::method

# Ruby
{ |item| item.method() }
# Can be replaced with
&:method
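In practice this makes the earlier mapping examples even shorter:

// Java
countries.stream().map(Country::getName)

# Ruby
countries.map(&:name)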

If you want to learn more functional programming, Haskell, Clojure (or Common Lisp), and Elixir are all interesting.

Making Slackbots

This semester, for my group project, I made a slackbot to select people for code reviews and generally be a nuisance in our Slack group. I split it out into a gem which can be used to integrate easily with the Slack real time messaging API. All it really does is provide a wrapper around the websocket connection and call methods according to the type of the update received (typically the only update you care about is ‘message’, so there will only be one method). It may just be a wrapper, but it is my wrapper, and I’m very pleased with how easy writing and maintaining the SENG group bot is.

Fast forward a month or two, my flatmate Logan and I entered the MYOB ‘try and think of a good idea we can steal later’ competition. Each team has five days to build something that could improve, work with, or build on something that MYOB already offers. We quickly settled on the idea of a slackbot that would help you timesheet by reminding you regularly to tell it what you’re doing - so that at the end of the day you have a reliable record of what you spent your time on that you can use to make an accurate timesheet.

Initially we were set on writing whatever we made in Swift (because of just how cool it is), but it is a massive pain to get the correct nightly build needed to use third party libraries, and installing it on Arch Linux is not trivial. We soon decided to take the more pragmatic approach and use Ruby, along with my realtime-slackbot gem (after making some changes to make it more usable by other people).

It’s important to understand that there is a significant difference between a Slack app and a Slack integration - apps are distributed through Slack’s marketplace and typically can be added in one or two clicks via an ‘Add to Slack’ button. Custom integrations are specific to a single team and are added by creating a new integration on the team config page, then using the token from there when starting the bot.

My previous bots had all been custom integrations - specifically tailored to my team and hosted on my Raspberry Pi at home. What Logan and I were setting out to do was make a proper Slack app that could be installed and used by anyone, in any team. This meant implementing the OAuth ‘flow’ to get a token that could be used in a certain team. The sequence of events goes something like:

  1. The user clicks the Add to Slack button on your website
  2. They select one of their teams to add the app to
  3. Slack sends a one-time code to your server
  4. You use this code to get a permanent auth token for the team
  5. You send this token to the RTM.start method of the API to get a websocket URL
  6. A new bot instance connects to this URL and starts interacting with the members of the team.

My gem was built to only handle the last two steps of this sequence. So we obviously had to implement a web server that could handle the callback from Slack, and host a page for the Add to Slack button to live on. We ended up using Sinatra for this, as it is very well supported and can be used in a single file - which is great when you just want to serve two mostly static pages.
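The whole callback handler only needs a few lines of Sinatra - a rough sketch (the route and the two helpers are made up, and the actual token exchange with Slack is omitted):

require 'sinatra'

get '/oauth' do
  code = params['code']            # the one-time code from Slack
  token = exchange_for_token(code) # hypothetical call to Slack's oauth.access
  notify_bot_manager(token)        # hypothetical hand-off to the bot manager
  'Bot added!'
end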

Once we could handle the web side of things, we had to actually create new bots when a new user added the app to their team. This is where the real ‘fun’ begins. We aimed to have the web server doing its own thing (managed by Rack) and have a separate process that would manage the bots and create new ones on demand from the web server.

There are many different ways that you could communicate between these two processes. You could have a queue, stored in a file or a database, that is polled by the bot manager every few minutes - but a file is a bit janky and a database is overkill. You could implement some UDP or TCP socket connection to communicate - probably a lot of work, and prone to encoding/decoding errors if you don’t do it well. Thankfully Logan quickly found that Redis can act as a message-passing system - any number of processes can subscribe to a channel, and any message on that channel will be sent to all subscribers. Perfect.

This quickly made Redis one of my favorite new toys - it was so easy to persist (or at least kind of persist) data as well as co-ordinating multiple processes. Our web server would simply send a message to the bot manager with a new token, the bot manager would save this token in Redis for later and start a new bot. The bot would then act just like a custom integration, as all it needs is the token and it will work out the rest.
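A minimal sketch of the pattern using the redis gem (the channel name and bot class are made up):

require 'redis'

# Web server side, after a successful OAuth exchange:
Redis.new.publish('new_token', token)

# Bot manager side - subscribe blocks and waits for messages:
Redis.new.subscribe('new_token') do |on|
  on.message do |_channel, token|
    Thread.new { SlackBot.new(token).run } # each bot gets its own thread
  end
end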

So, quick recap: the Sinatra server responds to the authentication endpoints for Slack, and the bot server subscribes to a Redis channel which lets it know when to connect a new bot. Each new bot is run in a new thread by the bot server.

I think this is a fairly decent effort for a five-day project, especially given that the actual bot that would do the timesheet reminding hadn’t really been started. But nothing built this hastily is without bugs or unhandled edge cases, or has the robustness that you would hope for in a web service.

Elixir

Elixir is a programming language that runs on the BEAM VM (the home of Erlang). Elixir is to BEAM what Kotlin or Scala is to the JVM - an alternative language that runs in the same environment and is interoperable with the VM’s main language. If you look a bit further into Elixir, it is actually mostly just a pile of macros that somehow create a usable language. Like Erlang, Elixir is a functional language with no mutable data - every value is constant. The only way to change the state of the application is to run a separate process and use message passing to manipulate that state.

The ability to run many processes easily in parallel is what makes Elixir/Erlang interesting. Each process is independent of all others, so if something breaks in one process nothing else is affected. By splitting an application into different processes (which is necessary anyway because everything is immutable) you can create a tree structure of processes. Each leaf can crash and be restarted by its parent, or the parent can choose to send the crash further up the chain by crashing itself. Somewhere up the tree there is a supervisor that restarts the crashed processes, keeping the application alive.
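As a rough sketch of what a supervisor looked like in the Elixir of the time (module names are made up):

defmodule BotApp.Supervisor do
  use Supervisor

  def start_link do
    Supervisor.start_link(__MODULE__, :ok)
  end

  def init(:ok) do
    children = [
      worker(BotApp.Slack, []),    # the bot connection process
      worker(BotApp.Scheduler, []) # a periodic job process
    ]
    # :one_for_one - restart only the child that crashed
    supervise(children, strategy: :one_for_one)
  end
end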

Going back to my SENG slackbot: I wanted it to be able to remind everyone, at a certain time every day, of the merge requests that they still had to review. Initially I reworked my Ruby bot to post something to a given channel each day, but it turned out to be a bit buggy and would cause the bot to crash - mainly because of my lazy programming. For something that I didn’t really want to worry about, it was a pain.

It is probably quite obvious where this is going. I decided to rewrite the bot in Elixir, using an existing Slack module. The Quantum library also simplified posting at a certain time of day by adding a cron-like job scheduler that just runs in its own process in the background. The main advantage of using Elixir here is that by making a simple supervisor to start each process in the application, any part that crashes will be automatically restarted. At one point there was a bug where any message received by the bot that didn’t have a user ID (e.g. a deleted or edited message) would crash it. But this crash was inconsequential, as the supervisor would just create a new process running the bot and reconnect. I left this version running for about a week before getting round to fixing it, as it wasn’t really a huge problem - unlike any problems with my Ruby bot, which would be very unhappy about any errors.

Another bonus of Erlang and Elixir being so oriented around processes is that the processes don’t have to be running on the same computer. Almost by magic, an application can be split across machines without having to rewrite a whole load of code - although this comes at the cost of writing code in the process-oriented style.

So I have a new favorite toy for writing server-side services. What really makes me enthusiastic about Elixir is that every part of the Ruby slackbot system that Logan and I made could be implemented in a single Elixir application. The web application would no doubt use Phoenix, and pass off requests to create new bots to a bot manager process, which would create a new process for each bot. If we somehow managed to get an influx of users, the bots could be split off onto a different server entirely. Redis would not be needed for communicating between the processes, and a stateful Elixir process could be used to store key/value pairs, easily persisted to a file using the built-in Erlang serialization (which works really well because everything is just a combination of lists, tuples, and maps).

The most important thing that I’ve learnt from this is that while you can do almost anything in your language of choice (see: Java developers), the overhead of twisting it to fit the problem might outweigh the cost of learning a new language that is better suited. Either that, or I’m too easily excited by new programming languages and am a mediocre Ruby developer.

Why I Dislike ATDD

This was written as the final section to a university lab report on testing, ATDD, and mocking.

Both Cucumber and Concordion aim to make it easier to write more understandable tests at a higher level - instead of writing unit tests that check very specific and granular aspects of a class, the acceptance tests ensure that the feature behaves as expected for the end user.

At my internship over the summer, I worked on an open source project management system called Redmine, and some of its plugins. The Redmine Backlogs plugin adds agile functionality to Redmine, and has a massive suite of Cucumber tests that I had to maintain. After seeing the ‘bad side’ of computer-evaluated acceptance tests and ATDD, I am very sceptical of the benefits of Cucumber - and have major doubts about Concordion.

The Backlogs tests consisted of about 20 feature files, each ranging from 1-2 scenarios up to about 6 - which could mean around 200 lines of steps. The actual step definitions were split into 3 files (given, when, and then steps - it was a Ruby project, so it isn’t as strict as the Java implementation). These were about 1500 lines each.

Imagine the following scenario: you’re tasked with making the tests pass after some feature was added, or a change in the environment caused them to fail. Running the tests reveals which of the scenarios is failing, and you have a line in a feature file that is causing it to fail. Due to the fact that the actual definition of the step is defined by a regular expression, you can’t find it by simply searching for the line in the feature. Eventually you find it somehow - probably by doing a regex search for something similar to the step text.

Now that you’ve found the step definition, you can debug that step - or any of the steps above or below it in the scenario (which you have to find by repeating the same ordeal outlined before). You fix the scenario and any others that were affected by the change. You decide that it’s good practice to write a new test that tests the feature that was just added.

Here you have the reverse problem from debugging - you don’t know what steps have already been defined to create the new test. Your IDE or editor likely doesn’t have any kind of autocomplete to help you fill out the steps in the scenario. Instead you add an expression to the step definition files that will be used in just your test - adding to the mass of bespoke step definitions already written.

This is obviously the worst case of Cucumber or any ATDD framework. On the flip side, I created my own plugin for Redmine while I was working. When it came time to test it, we decided that Cucumber would be easiest - the whole team understood it and it was already set up for one plugin, so the amount of work needed to get it working on another was minimal.

Working on a project from scratch, Cucumber was very easy to use - I knew off the top of my head every valid step definition and the options that I could give it. When creating my own definitions, I could write them in such a way that they could be reused and extended later to test different situations. Obviously this is the difference between knowing a codebase and being completely new to it - as well as the difference between the worst type of codebase (an unmaintained open source project) and the easiest to understand (a small project by one developer, who is you).

Even knowing that I was working in the worst case, I am sceptical of the benefits of computer-evaluated acceptance tests. Talking to Sam - a coworker over the summer, and all-round testing guru - he says that the idea of Cucumber is flawed to begin with. It assumes that the client or PO will provide acceptance criteria detailed enough to test the feature sufficiently and specific enough to be turned into valid Cucumber instructions. If I were working with a PO that did give acceptance criteria of this quality, I would jump to Cucumber almost immediately.

Concordion, on the other hand, completely stumps me. I understand that having nicely formatted results that can be shown off to stakeholders could prove useful, however the overhead required to test using Concordion seems to be through the roof for little or no gain. In a nutshell, what Concordion appears to do is take all the assertions out of a normal JUnit test and put them inline in HTML elements. Once again, this disconnect between the actual code and the expected results would make it harder to maintain and debug tests. In my mind Cucumber is better because the content of the feature files is just the description and expected result, whereas the Concordion files mix the description and tests with the layout of the result.

It seems like the end result of Concordion could be achieved by parsing JUnit tests with a known format of JavaDoc and assertion messages. These could be parsed as the tests were run and used to generate an HTML file - much like a JavaDoc - with the test results, which could then be styled appropriately. In fact, this could probably be done with annotations and reflection, without the need to parse the test code manually.

So far my thoughts on ATDD are that developers should spend time doing what they are best at, with the tools that they work best with - nine times out of ten this is writing code in their preferred IDE, not writing English or HTML-JUnit hybrids that will be run as tests. Perhaps my view of ATDD is skewed because I first used Cucumber in the worst possible way. If I do end up using ATDD as part of my group project, I hope it is well managed and used appropriately - maybe I will come around to this way of testing.

Why I use Nginx

There are two very important reasons why I use Nginx to run my website:

  1. It was the first thing I used
  2. It has smaller config files than Apache

Even though I have been using it for quite some time, I didn’t really understand it - until I set up a second static hosting domain to host a Jenkins theme, which made me realise it’s not too bad.

The CSS would only be applied if the HTTP headers were correct (i.e. it had text/css rather than just text/plain), and files served through GitLab’s ‘raw’ mode have a text/plain header.

So this is my nginx config file, in sections.

http {
  include /etc/nginx/mime.types;
  passenger_root # Path to the passenger gem;
  passenger_ruby # Path to the ruby shim, from rbenv;

All of my config is in the http section. I’d guess that I can have other sections for different protocols, but this is just a basic web server so all I need is HTTP.

The include mime.types line makes nginx serve static files with the correct Content-Type header for each file extension - which is why serving the theme from here works for my Jenkins server while GitLab’s raw mode doesn’t.

server {
  location / {
    root /var/www/blog;
  }
}

This section defines a default server - anything that doesn’t match another server block will be sent here; for example foobar.javanut.net will just go to the main blog. I could add more things in here if I wanted a subsection to go somewhere else - say I wanted to serve some other content at javanut.net/my_stuff. I could just make a new location block that points to a different directory on my server, as sketched below.
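That might look something like this (the paths are hypothetical):

location /my_stuff/ {
  # alias maps /my_stuff/foo to /var/www/other/foo
  # (root would instead look for /var/www/other/my_stuff/foo)
  alias /var/www/other/;
}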

server {
  listen 80;
  server_name static.javanut.net;
  root /var/www/static;
}

This is basically the same as the previous section - it’s just another static file server that points to a different folder. The main point here is that the server_name has been set, so that it is only accessible via static.javanut.net. Given this, the location {} block in the first example is probably unnecessary - the root can be set directly on the server block.

server {
  listen 80;
  server_name my_rails_app.javanut.net;
  root /var/www/my_rails_app/current/public;
  passenger_enabled on;
}
} # closes the http block opened at the top

Again this is very similar, but this is for a Rails app using passenger. Passenger needs to be installed when nginx is compiled - there is no plugin system for nginx.

Enumerating The Ways I Love Swift Enumerations

As Casey illustrates, enums in Swift are quite awesome and really powerful once you get used to them.

When I was working on WORM I kept wanting to make things enums, and every time I tried I wanted to attach values to each of the options. For example, I created an @Stored annotation which you could either tell to work out the type for you, or give a custom type string to use when it created the table. In Swift I could have done:

enum ColumnType {
  case Infer
  case Custom(type: String)
}
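The payoff comes when pattern matching on it - a hypothetical use:

let column = ColumnType.Custom(type: "VARCHAR(255)")

switch column {
case .Infer:
    print("inspect the property to work out the type")
case .Custom(let type):
    print("use the given type string: \(type)")
}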

What is interesting is that I thought of the Swift solution before coming up with a Java version - my ‘native’ language.

Testing GitLab CI

During my internship this summer I found myself pining for a continuous integration server. The project I was working on had a massive set of Cucumber tests. The only problem was that they took 40 minutes to run completely, which is a bit too much of a pain to run regularly on a local machine. Last semester for my software engineering group project, we were given Jenkins servers to run our tests on - this enforced the habit of keeping the tests up to date and fixing anything that breaks them.

I started looking around out of curiosity to see what else there was apart from Jenkins, which was not the most friendly thing to set up at the start of the project. After a little search I came across GitLab CI which integrates right into GitLab (obviously) and is written in Go which makes it quite cool right off the bat.

GitLab CI can be the simplest build server that you could imagine - it can easily be set to just run a shell script when a commit is pushed: if the exit status is zero the build succeeded; if it’s non-zero, it failed. This basically means that you don’t have to learn a new configuration syntax to do anything (you can, but it’s definitely not needed). If you can run your tests from the command line, you’re good to go.

Once it has been set up, every commit to the repo will trigger a build on your server and the result will be displayed in the ‘builds’ tab of GitLab and when you view the commit. This can be done with either GitLab.com or a different hosted instance of GitLab.

Full installation instructions for the CI Runner are on GitLab’s website. However, it’s as simple as installing a package and running the setup (instructions for Ubuntu; other distros on GitLab’s website):

# Add the source to apt-get:
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-ci-multi-runner/script.deb.sh | sudo bash
# Install the runner package
sudo apt-get install gitlab-ci-multi-runner
# Run the setup
gitlab-ci-multi-runner register

The instructions say to run the last command with sudo, but when I did this my config file was set to be in /etc/gitlab-runner/config.toml rather than the expected ~/.gitlab-runner/config.toml.

The register command sets up a runner to point at a certain GitLab URL (either GitLab.com or your custom instance) with the token needed to pull your code. I set mine up with:

URL: https://gitlab.com/ci
Token: ~~ secret token ~~ # Accessed in the main project settings
Description: Test runner
Executor: shell

I made a quick branch on one of my projects that has a fair number of easily run unit tests. All I had to do was add a .gitlab-ci.yml file:

maven-package:
  script: "mvn package -B"

maven-package is the name of the build process, and the script key denotes either a single bash command or a list of commands. Once this was pushed to GitLab a build immediately started.

And failed instantly. Thankfully a full log gets output to the web interface, and I could see that the runner was getting confused trying to load up a Docker instance - even though I hadn’t configured that. So I found the config file (which wasn’t where I expected, as I mentioned before) and deleted all the entries apart from the main [[runners]] section (getting rid of the [[runners.docker]] section probably would have been enough). Once I’d made this change the build completed successfully.
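The trimmed-down config ended up being something like this (values from the register step; the token is elided):

concurrent = 1

[[runners]]
  name = "Test runner"
  url = "https://gitlab.com/ci"
  token = "~~ secret token ~~"
  executor = "shell"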

Right now I’m very impressed with the ease of setting up a GitLab CI Runner and will definitely use one in the future (especially if I get a scooter computer) for the odd occasion that I write unit tests. However, if I did set up a CI server I would want to make sure the gitlab-runner user had as few permissions as possible - probably only able to read and write within its own home directory - so that the chance of it breaking my setup is reduced.

4K Video Editing on a 12" MacBook?

Of course the difference between Final Cut being built for a single platform and Adobe Premiere being cross-platform accounts for much of the gap, but I think this really illustrates the advantage of software running on hardware that it’s expecting.

Welcome to Swift.org

Swift is now open source!

Finally I can start having a more serious look at making something with Taylor and deploying it onto something other than my laptop. At work this morning I downloaded the Swift binary and fired up the REPL. Fully functioning Swift on Ubuntu. The future is now.

Perhaps more interesting than the actual Swift repository is the Swift Evolution page, which publicly shows the features and direction that both Apple and the Swift community want the language to head in. It makes me very excited to see speed, portability, and API design among the goals for version 3 and beyond. This could mean more consistent APIs and a cross-platform Foundation library that wraps the native functions for each system - at the moment pre-processor commands are needed to use platform-specific libraries, which is not very Swift-y.

The first commit to the Swift project is dated July 18, 2010. It’s crazy to think that this was kept completely secret for four years before it was unveiled. Also pointed out in the comments is that Swift was named Swift since its inception.

Along with the dump of projects released this morning is the Swift Package Manager. I am probably far more excited about this than it is normal to be for a tool that I haven’t really looked at yet. However, because of the pain that CocoaPods has caused me while trying to write unit tests that access a database, I’m happy to see a first-party solution - and will be updating my version of SQLite.swift as soon as I can.

Life with Swift

Since Apple introduced Swift at WWDC last year, I’ve been interested in it as a compiled language that seems as easy and quick to develop in as a dynamic scripting language like Python. Especially since Swift will (hopefully) be open sourced late this year, meaning it could be used to develop applications that can be deployed onto a webserver as a simple binary (no Capistrano necessary).

Swift’s basic syntax is incredibly clean and easy to get your head around. Keywords take second place to syntactic symbols - extending a class is done with a colon: class Subclass: Superclass, Protocols {} rather than the more verbose Java syntax class Subclass extends Superclass implements Interfaces {}. I like this both for the reduced typing and for how the colon is reused everywhere a type needs to be set:

var str: String // variable initialisation
func things(number: Int) // argument definition

Return values are the exception, though. It would make sense that, like a variable, a function should have its type attached with a colon - but instead a one-off symbol is used: func getNumber() -> Int {}. This would be nicer and more consistent as func getNumber(): Int {}.

Swift’s optional types are very convenient and make code more explicit - being forced to unwrap values that could be nil makes code that deals with user input or stored values a whole lot cleaner. For example, if you read a number from a text field and need to turn it into an int, Int(myString) returns an optional int - it may or may not be nil. You can then unwrap it:

if let number = Int(myString) {
    // Do something with the number
}

This is really handy, and extends to almost all parts of the language and the Cocoa API. It can be further enhanced by using optional chaining - adding the ? operator to an optional value allows you to call methods on it as though it were a definite value, and the value returned by the last method in the chain is itself an optional. For example, if you have a dictionary of strings and you want to get one of them lowercased:

let lowercased = myDict[key]?.lowercaseString

Where this falls down is when the key is an optional value as well - you can’t index a dictionary with an optional if the key type isn’t optional. What I would like to be able to do is use the question mark to maybe unwrap the key: if it isn’t nil, use it to look up an item in the dictionary. Like this:

let value = myDict[key?]

But you can’t do that. The closest you can get is something like this:

if let k = key, let value = myDict[k] {
    // value is a definite value that is in the dictionary
} else {
    // either key is nil, or there is no value in the dictionary to match it
}

What makes Swift that bit cooler than other languages that I’ve dabbled in is that it has the standard functional programming functions - map, filter, and reduce - which makes working with arrays a whole lot less cumbersome for anyone with a bit of functional programming prowess. Paired with the powerful closure support, it’s easy to express an operation in terms of a few closures. To turn a list of strings into a list of all the ones that can be turned into ints you can just map and filter them:

let nums = myStringList.map({ str in
    Int(str)
}).filter({ possibleInt in
    possibleInt != nil
})

To sum these you can use the name-less closure syntax - although note that after the filter the values are still optionals, so they have to be unwrapped:

nums.reduce(0) { $0 + $1! }

None of this would be possible without Swift’s type system. When first looking at Swift I thought that it was simply statically typed like Java, except you didn’t have to explicitly declare the type of variables - they would be inferred for you where the compiler could work them out. However, Swift’s types can behave somewhat like Haskell’s, letting you create functions that don’t just work on a string or a number, but on any type that implements a certain protocol.

In Haskell you might come across something like:

isSmaller :: (Ord a) => a -> a -> Bool
isSmaller a b = a < b

Which uses the orderable (Ord) type class to declare a function that can be used on any type that supports ordering - strings, characters, numbers, etc. Swift has an expanse of built-in protocols that let you do similar things. For example, I wanted to be able to do set operations on lists while keeping the order of the elements, so I made an extension that would extend an array of elements that implemented the Hashable protocol - meaning that the contents of the array could be put into a set.

extension Array where Element: Hashable {
  // Keep the first occurrence of each element, preserving order
  func unique() -> [Element] {
    var seen: [Element:Bool] = [:]
    return self.filter({ seen.updateValue(true, forKey: $0) == nil })
  }

  // Everything in self that isn't in takeAway, preserving order
  func subtract(takeAway: [Element]) -> [Element] {
    let set = Set(takeAway)
    return self.filter({ !set.contains($0) })
  }

  // Everything in self that is also in the given array, preserving order
  func intersect(with: [Element]) -> [Element] {
    let set = Set(with)
    return self.filter({ set.contains($0) })
  }
}

These functions will be added to any array that contains elements that are hashable - if they aren’t, then I simply can’t use the functions. The more I get used to things like this, the more I like programming in Swift. It successfully combines the things I like in many different languages into one - it’s compiled, quick to write, allows for functional programming as well as rigid object-oriented structures and the ability to extend the language itself seamlessly.

Not Enough Magic

There were some minor murmurings this week after Apple released their new peripherals - the new Magic Trackpad, Mouse, and Keyboard. (This was mostly drowned out by the far more significant news that the new iMac has a platter HDD.) A very vocal portion of the internet was outraged that the Magic Mouse has a charging port on the bottom:

Magic Mouse with port on bottom

It’s easy to quickly dismiss this as a stupid decision and go about your day. But the mouse is supposed to get enough charge for 8 hours of use in one minute, or three months of use from charging overnight. This means that you will only ever see it awkwardly upside down and charging less than 2% of the time - and the other 98% of the time you will be using a magical mouse that has no visible charging port. Had Apple opted to place the port on the front, the upper glass surface wouldn’t be able to dip down nearly as far as it does on the current and previous models. It also saves the mouse from having a little pointy nose/mouth at the front - which would have made the previous model look far better in comparison.

Yesterday the batteries in my keyboard ran out, and so I had to stop using my computer and wait for them to charge overnight (or bring out my classic wired keyboard, but that’s a bit too far). It would have been great to be able to simply plug in whatever device was running out of power for literally one minute and then continue working as normal. Having built-in batteries also means that remaining-charge estimates can be far more accurate - my keyboard and trackpad have no idea whether they have 1.5V AAs or 1.2V rechargeables, meaning that the charge percentage is almost always off.

What the real concern should be with these new peripherals is what the lifetime of the batteries will be. Coupling the battery to the device means that as the battery degrades the device becomes more and more painful to use. You shouldn’t have to buy a new keyboard because your old one can’t keep charged any more. I own three wireless input devices, all of them over 6 years old. This isn’t a problem for this generation of devices because their batteries are replaceable and it’s just a matter of clicking a new set in.