{ Will Richardson }


Needless complexity: Generalising a Scheme for Aikido Training

It is perhaps a little-known fact that I have practiced Aikido for about 13 years now.

I’m bad at writing introductions, so let’s jump straight to the problem. When training with more than one other person, you have to have some way of deciding who attacks whom - you can’t just alternate. Normally when practicing tachi waza, the most senior student goes first and does the technique four times on their uke before the roles are swapped. So what do you do when someone else joins your pair?

You could just make a directed triangle - person A is attacked by B, B by C, and C by A, before the cycle repeats. This is easy to describe and extends to any number of people, but person A will never be attacked by person C - they miss out on any feedback that person C may have for them. You want a method that allows everyone to train with everyone else, while still letting each member do the technique enough times to improve.

At the moment, this is the recommended way of training:

A - B
A - C
B - C
B - A
C - A
C - B

Now this is fine, apart from the fact that it only applies to exactly three people. The programmer in me wants a method that applies to any number of people. How about:

func train(members: [Person]) {
  for nage in members.sorted(by: .rank) {
    for uke in members.sorted(by: .rank) where uke != nage {
      uke.attack(nage)
    }
  }
}

Basically, starting from the highest-ranked member, each member has a turn as nage. They should be attacked by each other member, in the order of their rank. This is how training in a pair works, and it works just the same way if the whole class is training together.
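
The loop above is only pseudocode, but the idea translates directly. A quick sketch in Kotlin (with a hypothetical Person type, invented here just for illustration) shows that the scheme produces every ordered pairing exactly once:

```kotlin
// Hypothetical minimal model of the scheme above, just to sanity-check it.
data class Person(val name: String, val rank: Int)

// Each member takes a turn as nage in descending rank order, and is
// attacked once by every other member, also in descending rank order.
fun trainingOrder(members: List<Person>): List<Pair<Person, Person>> {
    val byRank = members.sortedByDescending { it.rank }
    return byRank.flatMap { nage ->
        byRank.filter { it != nage }.map { uke -> nage to uke }
    }
}
```

For n people this yields n * (n - 1) rounds, with every (nage, uke) pairing appearing exactly once.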

This gets slightly more confusing when you are doing weapons practice - there is a less clear distinction between the uke and the nage; the uke is often not thrown by the nage, and the uke still has to learn the attack, as it is not just a single strike or grab.

It’s common with weapons practice for a pair to train one side of the technique, then swap and train the other, before moving on to the next member of the group. This reduces the distraction of changing partners, letting you focus on the technique. This can be generalised in a similar way - this time each member of the group, in descending rank order, is the ‘key’ member, who practices both sides of the technique with each other member; then the ‘key’ role passes to the next member by rank.

func train(members: [Person]) {
  for key in members.sorted(by: .rank) {
    for other in members.sorted(by: .rank) where other != key {
      other.attack(key)
      key.attack(other)
    }  
  }
}

Basically I think too much about the efficiency of how I am training, rather than focussing on the training itself. I guess that’s what happens when you spend all day learning about Software Engineering and stuff.

More Fun With Generics in Kotlin

Android now officially supporting Kotlin means more people are playing around with it. Ben Trengrove is one of them - he has made quite a neat way of representing units in a type-safe wrapper. This disallows operations on two quantities of different measurements - for example, a speed cannot be added to a distance. Adding helper extensions to numeric types allows you to use it like this:

val distance = 21.kilometers

// This is OK because they are both distances
println(distance + 5.miles)

// This fails because you can't add distance to time
println(distance + 9.minutes)

You can have a look at Ben’s code here. Currently Ben’s code allows you to multiply one quantity by another. The result is a quantity with the same unit as the operands passed to the multiplication. This doesn’t follow the rules of dimensional analysis - if a distance is multiplied by another distance, the result is not a distance, it’s a 2D area.

What this means is that you can write code like this:

val width = 10.meters
val height = 5.meters

// Area has unit Distance.Meter, not meters squared
val area = width * height

This piqued my interest - how could you implement units like speed and area, that are composed of multiple units? Of course you could just remove the .div and .times methods and replace them with extension functions that return a Quantity<Speed> or Quantity<Area> for each combination of units that you’re interested in.

But surely we can do better? This is what I set out to do, I wanted to be able to define the base units and derive every other unit from them. If you want to skip the rambling, you can check out the end result here.


The premise of this approach is to make two new subclasses of Unit 1 that each have two generic parameters - each of which must also be a type of Unit. These represent a division type and a multiplication type: QuotientUnit and ProductUnit. All units have a suffix attribute that stores the standard identifier for that type (like “m” for meters, “s” for seconds, etc). QuotientUnit and ProductUnit create their suffix from the suffixes of their parts, with either “/” or “.” in between.

The basic units are still defined in a similar way: 2

abstract class Unit(val suffix: String, val ratio: Double) {
  internal fun convertToBaseUnit(amount: Double) = amount * ratio
  internal fun convertFromBaseUnit(amount: Double) = amount / ratio

  override fun toString() = suffix
}

open class Distance(suffix: String, ratio: Double): Unit(suffix, ratio) {
  companion object {
    val Mile = Distance("mi", 1.60934 * 1000.0)
    val Kilometer = Distance("km", 1000.0)
    val Meter = Distance("m", 1.0)
    val Centimeter = Distance("cm", 0.01)
    val Millimeter = Distance("mm", 0.001)
  }
}

The implementation of the composite units looks like this: 3

class QuotientUnit<A: Unit, B: Unit>(a: A, b: B):
    Unit("$a/$b", a.ratio / b.ratio)
class ProductUnit<A: Unit, B: Unit>(a: A, b: B):
    Unit("$a.$b", a.ratio * b.ratio)

They are really just placeholders to keep the type system in check. We can then use them to extend our .div and .times methods to return quantities with composite types. So inside the Quantity class we add:

operator fun <R: Unit> div(quantity: Quantity<R>): Quantity<QuotientUnit<T, R>> {
  return Quantity(amount / quantity.amount, QuotientUnit(unit, quantity.unit))
} 
operator fun <R: Unit> times(quantity: Quantity<R>): Quantity<ProductUnit<T, R>> {
  return Quantity(amount * quantity.amount, ProductUnit(unit, quantity.unit))
} 
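
For context, here is a minimal sketch of the Quantity class these operators live in, pieced together from how it is used in this post - this is an assumption, not Ben’s exact code. The Unit classes from above are repeated so the snippet stands alone:

```kotlin
// The Unit classes from earlier, repeated so this sketch is self-contained
abstract class Unit(val suffix: String, val ratio: Double) {
    fun convertToBaseUnit(amount: Double) = amount * ratio
    fun convertFromBaseUnit(amount: Double) = amount / ratio
    override fun toString() = suffix
}

class Distance(suffix: String, ratio: Double) : Unit(suffix, ratio)
class Time(suffix: String, ratio: Double) : Unit(suffix, ratio)

class QuotientUnit<A : Unit, B : Unit>(a: A, b: B) :
    Unit("$a/$b", a.ratio / b.ratio)

// A sketch of the Quantity wrapper, assumed from how it is used here
class Quantity<T : Unit>(val amount: Double, val unit: T) {
    // Convert this quantity into another unit of the same dimension
    fun to(newUnit: T) =
        Quantity(newUnit.convertFromBaseUnit(unit.convertToBaseUnit(amount)), newUnit)

    // Addition converts the other quantity into this quantity's unit
    operator fun plus(other: Quantity<T>) =
        Quantity(amount + unit.convertFromBaseUnit(other.unit.convertToBaseUnit(other.amount)), unit)

    // Division produces a quantity with a composite QuotientUnit
    operator fun <R : Unit> div(quantity: Quantity<R>) =
        Quantity(amount / quantity.amount, QuotientUnit(unit, quantity.unit))

    override fun toString() = "$amount $unit"
}
```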

So now when we divide a distance in kilometers by a time in hours, we get a quantity whose unit is a QuotientUnit<Distance, Time> with a suffix of “km/h”:

val distance = 21.kilometers
val time = 1.5.hours
println("Speed is: ${distance / time}") // Speed is: 14 km/h

And we should be able to do conversions between composite units as well, because the ratio of a composite unit is calculated from the ratios of the units it is built from.

val speed = 21.kilometers / 1.5.hours
val milesPerHour = speed.to(QuotientUnit(Mile, Hour))
println("Speed is $milesPerHour") // Speed is 8.7 mi/h

Now that’s quite useful. However, typing QuotientUnit(Mile, Hour) is not very elegant. Perhaps we can use some helper function to make this a bit more readable?

We can actually do better than a helper function: we can define extension operators for / and * on pairs of units. This lets us spell a composite unit as Mile / Hour, which is the same as QuotientUnit(Mile, Hour). You can do this like so:

operator fun <A: Unit, B: Unit> A.div(other: B) = QuotientUnit(this, other)
operator fun <A: Unit, B: Unit> A.times(other: B) = ProductUnit(this, other)

With all this, we can now do the unit conversion problems you get in physics class with almost no effort:

// James Bond is running along the roof of a train
// It takes him 1 minute to run the length of a 20-metre carriage
// The train is moving at 60 miles per hour
// How fast is James moving relative to the ground, in km/h?
val jamesSpeed = 20.meters / 1.minute
val trainSpeed = 60.miles / 1.hour

val totalSpeed = jamesSpeed + trainSpeed

val metricSpeed = totalSpeed.to(Kilometer / Hour)

println("Speed relative to ground is: $metricSpeed")
// Speed relative to ground is: 97.76 km/h

Easy!

The last thing I wanted to clear up was the need to repeat code for all the helper properties (6.minutes, 9.kilometers, etc). These have to be repeated for every type of unit, and I wanted a way of creating quantities without this repetition. In reality you’d probably keep these to make it easier on yourself, but it’s nice to have an alternative.
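
For reference, each of those helpers is just a one-line extension property on Number - something like this sketch (minimal stand-ins for the Unit and Quantity classes are included so it stands alone; Ben’s actual definitions may differ):

```kotlin
// Minimal stand-ins for the classes described above
abstract class Unit(val suffix: String, val ratio: Double)
class Distance(suffix: String, ratio: Double) : Unit(suffix, ratio)
class Quantity<T : Unit>(val amount: Double, val unit: T)

val Kilometer = Distance("km", 1000.0)
val Meter = Distance("m", 1.0)

// One extension property per unit - easy to use, tedious to repeat
val Number.kilometers get() = Quantity(this.toDouble(), Kilometer)
val Number.meters get() = Quantity(this.toDouble(), Meter)
```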

How about just a simple infix function that operates on a number and a unit? Or how about if you multiply a number by a unit, it creates a quantity with that unit? What about invoking the unit with brackets - like a function call - to create a quantity in that unit? These are all quite straightforward: 4

// 5 into Minute makes a Quantity(5, Minute)
infix fun <T: Unit> Number.into(unit: T) = Quantity(this.toDouble(), unit)
// 79 * Kilometer makes a Quantity(79, Kilometer)
operator fun <T: Unit> Number.times(unit: T) = this into unit
// Second * 3 makes a Quantity(3, Second)
operator fun <T: Unit> T.times(value: Number) = value into this
// Hour(12) makes a Quantity(12, Hour)
operator fun <T: Unit> T.invoke(value: Number) = value into this

All of these make it super easy to create quantities. And of course, you can use them with composite units: 5 * (Kilometer / Hour), Second(8), and 9.into(Meter * Meter) create speed, duration, and area quantities respectively.

Using these we could solve our physics problem from above like so:

// James' target is running twice as fast as him along the train
// How fast is the target moving relative to the ground, in m/s?
val jamesSpeed = 20 * (Meter / Minute)
val targetSpeed = 2 * jamesSpeed
val trainSpeed = 60 * (Mile / Hour)

val totalSpeed = targetSpeed + trainSpeed

val metricSpeed = totalSpeed.into(Meter / Second)

println("Speed relative to ground is: $metricSpeed")
// Speed relative to ground is: 27.5 m/s

Another helper that we can add is a division operator that operates on two quantities of the same type, producing a ratio of the two values rather than another quantity. You can use this to see “how many items of length X fit in space Y?”. This is done like so, inside the Quantity class:

operator fun div(other: Quantity<T>) = unit.convertToBaseUnit(amount) / other.unit.convertToBaseUnit(other.amount)

// To work out the ratio between two speeds:

println("Speed ratio: ${jamesSpeed / targetSpeed}")
// Speed ratio: 0.5

For completeness I also added operators for multiplying quantities by plain, unitless numbers - letting you do things like “double this distance” with 2 * distance. Quantities are also comparable, so less than and greater than work too.
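
Those last two additions might look something like this - again a sketch with minimal stand-ins for the classes above, not Ben’s actual implementation:

```kotlin
// Minimal stand-ins for the classes described above
abstract class Unit(val suffix: String, val ratio: Double) {
    fun convertToBaseUnit(amount: Double) = amount * ratio
}
class Distance(suffix: String, ratio: Double) : Unit(suffix, ratio)

class Quantity<T : Unit>(val amount: Double, val unit: T) : Comparable<Quantity<T>> {
    // Scaling by a plain number keeps the same unit
    operator fun times(factor: Number) = Quantity(amount * factor.toDouble(), unit)

    // Compare in the base unit, so 1 km > 999 m holds
    override fun compareTo(other: Quantity<T>) =
        unit.convertToBaseUnit(amount).compareTo(other.unit.convertToBaseUnit(other.amount))
}

// Allow the number on either side: 2 * distance as well as distance * 2
operator fun <T : Unit> Number.times(quantity: Quantity<T>) = quantity * this
```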

Hopefully this explanation illuminates some of the magic generics in the code - which you can view here. I’m sure there are operations and helpers that I’m missing, or ways that this code can be cleaned up and simplified. This would make for a kick-ass back-end for a unit conversion app!

  1. When implementing this, my first idea (which turned out to be a dud) was to make every different measure a Kotlin object. This meant that the types could be part of generic constraints, so QuotientUnit<Kilometer, Second> is a valid type. It seemed like a good idea initially, but quickly fell apart when I realised that QuotientUnit<Mile, Second> is a different type to QuotientUnit<Kilometer, Minute> - even though they both represent distance/time.

  2. In my examples I omit the prefix of the base type (eg Distance.) for readability. You can import the units directly, which allows you to use them with no prefix (eg Meter), or just import the class and access the companion variables (eg Distance.Meter). 

  3. I’ve changed my Unit class to have a .toString method that simply returns the suffix, differing slightly from Ben’s original version. 

  4. I decided to rename .to to .into so that it didn’t clash with the built-in .to extension in Kotlin that turns two objects into a Pair 

Templates, Code Generation, and Macros

Macros are a really cool feature included in a few cool languages (Clojure/Common Lisp/other Lisps, Elixir, Rust, and Crystal) that allows you to reduce boilerplate code, extend the capabilities of the language, and process data at compile time. There is no shortage of tutorials on macros, but I am going to approach them from a direction that may be more familiar to some people.

It doesn’t take long while programming to come across some kind of template. Whether it’s str.format() in Python or the moustache-templated bindings in $JS_FRAMEWORK, you’ll end up writing something that will be used to generate something else. We can use these to separate the content from how it will be displayed.

This snippet is some Embedded Ruby; the <% and %> denote the start and end of Ruby code. Outside of those tags is just HTML.

<ul>
  <% for item in items %>
    <li><%= item %></li>
  <% end %>
</ul>

Here we have items which is a list of strings, and we iterate over it creating an <li> with the content of each item. The server will run this code and generate some HTML, which is then sent to the browser. The browser doesn’t know that the HTML was part of a template, it looks exactly the same as hand-written HTML. What we’re doing is using Ruby to generate more code that could have been written by hand. Doing this for a formatting language makes sense - there is no way for HTML to show dynamic content. However because we’re just dealing with text, we can use a template to generate any type of file - even code.

function <%= func_name %>() {
  return "This is <%= func_name %>";
}

This is a silly example, but we’re using a template to create a JavaScript function. All it does is return “This is ” followed by the function name. But imagine we were making a library that interacted with a database, and we wanted an API something like <table_name>.get_by_<column_name>() without doing any metaprogramming (for example, if the language doesn’t support metaprogramming, or you don’t want the runtime cost of reflection) - a template could generate all of those methods as plain code.

A common example of something that uses a “template” to generate code is a parser generator (like Cup or Yacc). These read a file, written in their own syntax, that defines a grammar and how to store the AST, and produce a file of code in the target language that will parse input according to that specification.

Casey Liss recently wrote about Sourcery, a library for automatically generating boilerplate Swift code - it can do things like make a type Equatable by generating an == method that compares every property of the type. It does this by generating a .swift file with the code needed to define the method.

Sourcery is quite cool - but it means that you have to have a special Sourcery file that defines what it should do, and remember to run the Sourcery script to generate the new files. Ideally you would put the definition for what boilerplate to generate inside the Swift file with all the rest of the code, and the Swift compiler would generate it automatically before running the program.

This is essentially what macros are. They are pieces of code that make more code, and are run when the program is compiled. For example if Swift supported this, it might look something like:

func ==(other: this.class.name) {
  for attr in this.storedVariables {
    quote {
      guard self.{{ attr }} == other.{{ attr }} else { return false }
    }
  }
  quote {
    return true
  }
}

I’m imagining that quote will turn whatever is inside it into code that will be generated (like other languages), and the double curly braces escape a variable - {{ attr }} would be expanded to the name of the attribute.

For macros to be super effective, the language should be represented in its own data structures (languages like this are called homoiconic). Clojure is one of these, and provides syntax for denoting which code is to be evaluated and which code is to be used to generate more code. ` or ' starts a “this is for code generation” block - everything after it is like the HTML outside the <% %> tags in Ruby. Code after ~ is equivalent to code inside the <% %> tags.

So we could make a macro that prints the code it will evaluate before it runs (like the -x option in Bash):

(defmacro debug [code]
  `(do
    (println '~code)
    ~code))

This can then be used just like a normal function call, but instead of calling the function at runtime, it gets replaced when the code is compiled.

(debug (println (+ 8 6 (* 5 7))))
; Will be replaced with
(do
  (println '(println (+ 8 6 (* 5 7))))
  (println (+ 8 6 (* 5 7))))

The new code will first print "(println (+ 8 6 (* 5 7)))", then evaluate it - printing 49.

That’s a silly example, but a far more practical one is the Ecto library for Elixir. It is a DSL for running SQL queries; by using macros it basically adds an SQL-like language right into Elixir, which can be checked for validity at compile time, rather than putting SQL in string literals where errors are only found when the code is run.

Running code at compile time also lets you do some cool tricks that don’t involve creating “new” syntax. For example, resource files can be loaded right into the program, so nothing has to be read from disk when the application is running. Phoenix (an Elixir web framework) loads all the views when the code compiles and turns each one into a function that concatenates strings - so no parsing has to be done at runtime.

Of course, many languages let you use lambdas and similar constructs to create “new syntax” that gets expanded at compile time, but macros give the developer more control over what happens when code is compiled, and truly add new constructs to the language.

Conditionals in SH

I’ve been spending more time than I would like writing shell scripts recently, as I spend more time configuring my setup than I do on ‘real’ projects. What I’ve found interesting is how simple the core of a shell is, and the tricks some commands do to build on this.

Most *nix users have probably had a moment where they were writing a shell script and forgot the syntax for an if statement. I write shell scripts so infrequently that I often have to look it up. However, all the if statement does is run the condition command and check its exit status: if it is 0 it runs the main block, anything else and it runs the else block.

“But what about the square brackets?” I would think to myself. Well, that’s just a command. You know, the [ command. man [ reveals that it is just a standard command with some flags to tell it what kind of thing to check.

Let’s take a simple conditional that checks that two numbers are equal:

if [ $num1 -eq $num2 ]; then
  echo "Equal!"
fi

If num1 is 4 and num2 is 5, the [ command will receive "4", "-eq", "5", and "]" (remember, everything is a string in the shell). The command takes the arguments up to the closing square bracket and does the comparison - in this case -eq means integer comparison. As far as I can tell the closing bracket is just for readability - if you have a condition with logical operators (|| or &&) then each part of the expression can be in separate brackets (or you can use the -o and -a options to keep them in the same set of brackets).

So this means that we can do things like this:

[ -f some/file/path ] && cool_function_on_file some/file/path

Making use of the && builtin, rather than writing a whole if;then;fi block. Or when we remember that the condition can be any command, we can be a bit smarter in scripts:

if git clone "$full_remote$user/$project.git" "$_path"; then
  echo_cd $_path
fi

This will only change to the cloned repo’s directory if the clone succeeded (indicated by the exit status of the git command).

for loops work in a similar way, except instead of a condition we have a command that produces output with each element separated by the $IFS variable. $IFS is basically just whitespace/newlines, so we can capture the output of ls and iterate through each filename:

for filename in $(ls); do
  echo "It's a thing: $filename"
done

So in short, I have learnt a bit about shell scripting and now think it’s kind of neat, rather than getting frustrated at the seemingly nonsensical syntax.

For reference, [[ and == are builtins in Bash and other newer shells. == is no different to = (but it can’t be used to accidentally assign something). [[ works the same as [ apart from the fact that it can be used with < and > for comparison, as it can process them before they are interpreted as IO redirection in a normal command.

Tested: Apple Won't Make a Touch MacBook

Norm and Jeremy on the Tested.com podcast have frequently complained that Apple doesn’t make laptops with touchscreens. This past week they stated that a touchscreen MacBook was almost an inevitability at some point in the future. I think this is unlikely, and most definitely not something that will be released anytime soon.

Before I go any further, just a note: I have not owned a laptop with a touchscreen (I do use a Pixel C like a laptop frequently, though). But when has having no experience in a subject stopped anyone from voicing their opinion on the internet?

Why am I so adamant that there will be no touch MacBooks? The answer is simple: macOS. macOS/OS X is not designed to accept imprecise input from a touchscreen - the touch targets are far too compact. The window chrome on macOS has typically been smaller than that of Windows.

This was mentioned in an interesting post by the Chrome design team, which ran through the process of redesigning the Chrome UI across all platforms. The height of the chrome was significantly larger on Windows.

Most of the time I have spent using touchscreen laptops has been debugging group project code on teammates’ computers. This meant using IntelliJ - which has its fair share of menus and toolbar buttons - all designed for use with a mouse. Naturally, because of the novelty of having a touchscreen (or the mediocre quality of the trackpads), I used it instead of the trackpad.

IntelliJ is basically unusable on a touchscreen; the menus and buttons are too small to hit, and navigating nested menus is not at all pleasant. Anyone that has used a Mac knows that most normal applications have all their actions in the menu, and common actions can be placed in the application’s toolbar. The minimum recommended size for a touch target on iOS is 44 by 44 points, whereas the recommended size of toolbar items on macOS is “at least 19x19 points”; the actual clickable area is slightly larger than this, at 36 by 24 points. Menus are a similar story - they are only 30 points high.

For a touchscreen Mac to be a good user experience, macOS’s entire UI would have to be redesigned. This would mean a massive amount of work for third-party developers (maybe not so much for those that just use entirely system controls) and probably leave a sad collection of apps that look out of place in the new OS.

Of course, Apple has not shied away from making massive changes that require significant work from developers (switching to Intel, introducing retina displays, the Yosemite redesign, etc). However, given the fairly small changes to the Mac lineup recently, any major change seems unlikely.

This is coming from someone that uses the terminal to find files more often than Finder, and uses their Mac mostly for development - so perhaps my usage is not quite the norm. Although almost everything that I do that is not development is done on my Pixel C.

I think Apple’s answer to people that want a touchscreen laptop is the iPad Pro. And no, they will not merge macOS and iOS.

Bluetooth is Great, Until it's Not

Did you hear that Apple removed the headphone jack from the latest iPhone? Oh you did, well that’s a relief. What do you think? Have you sworn to never use an Apple device ever again? Well I thought I would care - but I don’t any more.

Let’s rewind a bit. Most of my time listening to audio on my phone since 2013 has been podcasts. All the great shows. I had been using a beat-up pair of Apple EarPods, as they fit my ears better than any other in-ear headphones. However, having to wiggle the cable every few minutes gets old fast, and I was on the lookout for a replacement.

I ended up looking at the Urbanears Plattan ADV and the Marley Positive Vibration headphones. Both are fairly reasonably priced and look good. When I went to buy them, I found that because of a sale, the Marley Rebel BT headphones were the price that I was expecting to pay for the other Marley headphones.

So I am now the proud owner of some bluetooth headphones, and the lack of a cable is liberating. I am no longer concerned about how my phone sits in my pocket, or if I leave it on my desk when I jump up to get something, or how the cable will tangle with the strap on my bag. I am a satisfied customer.

Of course it’s not all good. Bluetooth pairing is a scary business; Bluetooth devices don’t like sharing. Connecting headphones that are already paired to my phone to my tablet is a recipe for disaster. If I did, they would start auto-pairing to my tablet whenever I turned them on, and I would have to venture into the settings each time I used my headphones with my phone. Having a cable as a fallback does make this easier - if I’m using my tablet or laptop then it’s unlikely that the cable will get in the way, so limiting the Bluetooth pairing to my phone is not a big deal.

What would be ideal would be a pair of headphones that could accept a few different inputs and either combine them all or select the most recent one - so you could be listening to a podcast on your phone, then play a video on your tablet, and the phone would be told to pause while the video plays. That would be ideal, although it would mean that the headphones would either have to be in constant pairing mode to connect to new devices as they come in range, or require some button-press to look for a new device.

Of course, all good things must come to a saddening end - the more technology you add to something, the more ways it can break. So when I turned my headphones back on in preparation for the skate back home, instead of making the comforting “boo-doop” to indicate they were on and paired, they went “beeeeeeeeeeeeeeeeeeeeeeeep booooooooooooooop zero zero zero zero zero zero zero zero zero zero zero zero zero zero zero zero zero zero…” (yes, they literally had a computer voice saying “zero” over and over). Plugged in they worked fine, but any sign of Bluetooth working was gone. Back to the shop to claim that return policy!

So now I have a new pair, and right now they are working fine (I’m listening to some Mutemath as I write this - playing via my phone while I write on my tablet). But they weren’t without their own issues - for a few days they decided that they would only connect to my phone if I manually told my phone to connect to them. Then, completely out of the blue (heh), they started to connect automatically. Great for me, but damn weird.

Overall I am happy with the fact that my “daily driver” headphones are wireless - I will still use my Sennheiser over-ears if I’m working on my laptop at home. The ability to leave my phone sitting on a table, to have it facing whichever way in my pocket, and to get rid of the possibility of the cable catching on things makes wearing headphones more seamless in almost all situations. However while bluetooth is great when it’s working, it has plenty of opportunities to stop working and become less convenient than just plugging in a cable. And of course this means another device that needs charging - meaning I now have six devices that require charging regularly.

Basics of Functional Programming

As someone who enjoys learning new programming languages, it was only a matter of time before I came across functional programming languages, higher-order functions, and the like. Earlier this year I found out that Java 8 now supports some functional programming, and I have been writing less boilerplate code ever since - much to the horror of my teammates. So this is for you, so you can hopefully understand my spaghetti of lambdas.

Functional programming is based around the idea of passing code around just like you would any other object. If you’re into design patterns, it’s like you’re using a very loose version of the Strategy pattern or the Template pattern. You provide a set of instructions that will be inserted into an existing algorithm or operation.

Most languages that support higher-order functions (functions that take code as a parameter) have three ‘bread and butter’ functions built-in: map, filter, and reduce. These simplify common list operations by abstracting away the boilerplate.

Map

Let’s say that I have a list of countries, and I want to present them to a user in a certain format. This is a fairly common example where I have a list and I want to do an operation on each of its elements to produce a new list. You could say that there is a mapping from each element in the first list to an element in the second list. In first year you are told to do something like this:

countries = # Some list of country objects
country_names = []
for country in countries
    country_names.push(country.name)
end
# Do something with the list of country names

However a far more succinct way of doing this is to map the list:

countries = # some list of countries
country_names = countries.map { |country| country.name }

Both methods are doing the same thing, but (for someone who understands functional programming) the second is much clearer and reduces the amount of noise in the code. Of course the disadvantage is that it can hide potentially costly operations.

An important note with map is that the operation should not affect the object that you are mapping. For example, if you map the countries to get all their names, but also reset some attribute of each country - you’re asking for problems in the future. If someone later decides that they only want the names of the first ten countries, and you were relying on that side effect being performed on all of them - problems are inbound.

Filter

Filter treats your function like a sieve - everything that it accepts is let through, the rest is ignored. So in this case your lambda is taking an item and returning true if you want that item to make it through the sieve. Filter reduces even more boilerplate:

let numbers = [1, 2, 5, 6, 9]
var even_numbers = [Int]()
for number in numbers {
  if number % 2 == 0 {
    even_numbers.append(number)
  }
}
// Do something with the even ones

let numbers = [1, 2, 5, 6, 9]
let even_numbers = numbers.filter { number in number % 2 == 0 }
// Do something with the even ones

You can of course chain filter statements together, or include a few conditions - basically like an SQL WHERE clause. Filter is especially useful when you have a list of objects, and you want to get rid of the ones that are null.

Reduce

When you have a list of items and want to distill it down to one object that represents some aspect of the whole list, reduce is what you’re looking for. The lambda takes two arguments - the reduced list so far, and the item that you want to reduce ‘into’ this reduced form. Reduce also takes an initial value, which is what the reduced form starts off as. A great example is summing a list of numbers - the initial reduced form is 0, and each step adds the current number to it.

numbers = [1, 2, 3, 6, 7]
sum = numbers.reduce(0) { |so_far, number| so_far + number }

Reduce is hard to explain - mainly because I don’t end up using it very often. Most languages include helpers for the common reduce operations: join, sum, and product are great examples. Each take a list and give you back a single value that is the combination of every item in the list.

If you think about it, both map and filter can be implemented using reduce - making reduce the only list operation you really need. So really, map and filter are just helpers for the common cases of reduce.
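
To illustrate - in Kotlin, where reduce-with-an-initial-value is spelt fold - both can be written as a fold in a line each (the function names here are just made up for the example):

```kotlin
// map: fold into a new list, transforming each element as it is appended
fun <T, R> mapViaFold(list: List<T>, transform: (T) -> R): List<R> =
    list.fold(emptyList<R>()) { soFar, item -> soFar + transform(item) }

// filter: fold into a new list, only appending elements the predicate accepts
fun <T> filterViaFold(list: List<T>, predicate: (T) -> Boolean): List<T> =
    list.fold(emptyList<T>()) { soFar, item ->
        if (predicate(item)) soFar + item else soFar
    }
```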

Let’s make a lambda!

So with all this knowledge, how do you go about using it? Well…

In Ruby, any method that accepts a block (Ruby has lots of names for its anonymous functions) can be followed by one, written either with do ... end or { ... }
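The two block forms are interchangeable; by convention braces are used for one-liners and do ... end for multi-line blocks:

```ruby
# One-line block with braces
doubled = [1, 2, 3].map { |n| n * 2 }

# The same block written with do ... end
doubled = [1, 2, 3].map do |n|
  n * 2
end
```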

In Swift, closures are a type (defined by their arguments and return type) and, like Ruby's blocks, can either go inside the argument list or after the function call if the closure is the last argument.

Java doesn’t really support lambdas. They are instead an anonymous implementation of an interface that has just one method. So a lambda that turns a country into a string of the country name is actually a an implementation of the generic interface Function<T, R>, (ie it’s type is Function<Country, String>) and it has a method R apply(T t) that takes in a value of type T and returns a result of type R. The code in the lambda provides the implementation of this method.

All of the list operations are hidden in the stream() method on lists, as well as the Stream.of() method that creates a stream from an array. To turn your stream back into a list, you'll want the .collect(Collectors.toList()) method. So the country to country name mapping would look something like:

List<Country> countries = // Some list from somewhere
List<String> names = countries
    .stream()
    .map(country -> country.getName())
    .collect(Collectors.toList());

(Of course Java manages to still make a one line function into four)

Method references

If you program functionally enough, there will be some boilerplate - like creating a lambda that just calls one method on an object. So you can often refer to that method directly, rather than writing out the whole lambda:

// Java
(item) -> item.method()
// Can be replaced with
Item::method

# Ruby
{ |item| item.method() }
# Can be replaced with
&:method

If you want to learn more about functional programming, Haskell, Clojure (or Common Lisp), and Elixir are all interesting.

Making Slackbots

This semester, for my group project I made a slackbot to select people for code reviews and generally be a nuisance in our slack group. I split it out into a gem which could be used to integrate easily with the slack real time messaging API. All it really does is provide a wrapper around the websocket connection and calls methods according to the type of the update received (typically the only update you care about is ‘message’ so there will only be one method). It may just be a wrapper, but it is my wrapper and I’m very pleased at how easy writing and maintaining the SENG group bot is.
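The dispatch-by-update-type idea can be sketched like this (the class and handler names here are illustrative, not the gem's actual API):

```ruby
require "json"

# Illustrative sketch only - not the real realtime-slackbot API.
# Each update's "type" field picks the handler method to call.
class Bot
  def handle(raw_update)
    update = JSON.parse(raw_update)
    handler = "on_#{update["type"]}"
    send(handler, update) if respond_to?(handler)
  end

  # Typically the only update you care about is "message"
  def on_message(update)
    update["text"]
  end
end
```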

Fast forward a month or two, my flatmate Logan and I entered the MYOB ‘try and think of a good idea we can steal later’ competition. Each team has five days to build something that could improve, work with, or build on something that MYOB already offers. We quickly settled on the idea of a slackbot that would help you timesheet by reminding you regularly to tell it what you’re doing - so that at the end of the day you have a reliable record of what you spent your time on that you can use to make an accurate timesheet.

Initially we were set on writing whatever we made in Swift (because of just how cool it is), but it is a massive pain to get the correct nightly build in order to use third party libraries, and installing it on Arch Linux is not trivial. We soon decided to take the more pragmatic approach and use Ruby, along with my realtime-slackbot gem (after making some changes to make it more usable by other people).

It’s important to understand that there is a significant difference between a Slack app and a Slack integration - apps are distributed through Slack’s marketplace and typically can be added in one or two clicks via an ‘Add to Slack’ button. Custom integrations are specific to a single team and are added by creating a new integration on the team config page, then using the token from there when starting the bot.

My previous bots had all been custom integrations - specifically tailored to my team and hosted on my Raspberry Pi at home. What Logan and I were setting out to do was make a proper Slack app that could be installed and used by anyone, in any team. This meant implementing the OAuth ‘flow’ to get a token that could be used in a certain team. The sequence of events goes something like:

  1. The user clicks the Add to Slack button on your website
  2. They select one of their teams to add the app to
  3. Slack sends a one-time code to your server
  4. You use this code to get a permanent auth token for the team
  5. You send this token to the RTM.start method of the API to get a websocket URL
  6. A new bot instance connects to this URL and starts interacting with the members of the team.
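Steps 4 and 5 boil down to two HTTP calls, roughly like the sketch below (oauth.access and rtm.start are Slack web API methods of the era; the response field names are from memory, so treat them as assumptions):

```ruby
require "net/http"
require "uri"
require "json"

# Step 4: trade the one-time code for a permanent token.
# The "bot"/"bot_access_token" response fields are assumptions.
def exchange_code(code, client_id, client_secret)
  response = Net::HTTP.post_form(URI("https://slack.com/api/oauth.access"),
                                 "client_id" => client_id,
                                 "client_secret" => client_secret,
                                 "code" => code)
  JSON.parse(response.body)["bot"]["bot_access_token"]
end

# Step 5: ask rtm.start for the websocket URL the bot should connect to
def websocket_url(token)
  response = Net::HTTP.post_form(URI("https://slack.com/api/rtm.start"),
                                 "token" => token)
  JSON.parse(response.body)["url"]
end
```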

My gem was built to handle only the last two steps of this sequence, so we had to implement a web server that could handle the callback from Slack, and host somewhere for the Add to Slack button to live. We ended up using Sinatra for this, as it is very well supported and can be used in a single file - which is great when you just want to serve two mostly static pages.

Once we could handle the web side of things, we had to actually create new bots when a new user added the app to their team. This is where the real ‘fun’ begins. We aimed to have the web server doing its own thing (managed by Rack) and have a separate process that would manage the bots and create new ones on demand from the web server.

There are many different ways you could communicate between these two processes. You could have a queue, stored in a file or a database, that the bot manager polls every few minutes - a file is a bit janky and a database overkill. You could implement some UDP or TCP socket connection, which is probably a lot of work and prone to encoding/decoding errors if you don’t do it well. Thankfully Logan quickly found that Redis can act as a message-passing system - any number of processes can subscribe to a channel, and any message on that channel will be sent to all subscribers. Perfect.
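With the redis gem, the pub/sub pattern looks roughly like this (the channel name, method names, and message format are our own inventions for this sketch, and it assumes a Redis server is reachable):

```ruby
require "redis"  # the redis gem; assumes a Redis server is reachable

# --- Bot manager process ---
# Blocks waiting for tokens published on a channel.
# ("new_bot" is a made-up channel name for this sketch.)
def listen_for_new_bots
  Redis.new.subscribe("new_bot") do |on|
    on.message do |_channel, token|
      # save the token somewhere and spin up a bot in its own thread
      Thread.new { start_bot(token) }
    end
  end
end

# --- Web server process (a separate script in reality) ---
# Called from the OAuth callback with the freshly acquired token.
def announce_new_bot(token)
  Redis.new.publish("new_bot", token)
end
```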

This quickly made Redis one of my favorite new toys - it was so easy to persist (or at least kind of persist) data, as well as to co-ordinate multiple processes. Our web server would simply send a message to the bot manager with a new token; the bot manager would save this token in Redis for later and start a new bot. The bot would then act just like a custom integration, as all it needs is the token and it will work out the rest.

So, a quick recap: the Sinatra server responds to the authentication endpoints for Slack, and the bot server subscribes to a Redis channel which lets it know when to connect a new bot. Each new bot is run in a new thread by the bot server.

I think this is a fairly decent effort for a five-day project, especially given that the actual bot that would remind you about timesheeting hadn’t really been started. But nothing built this hastily is free of bugs and unhandled edge cases, or has the robustness you would hope for in a web service.


Elixir is a programming language that runs on the BEAM VM (the home of Erlang). Elixir is to BEAM what Kotlin or Scala is to the JVM - an alternative language that runs in the same environment and is interoperable with the main language for the VM. If you look a bit further into Elixir, it is actually mostly just a pile of macros that somehow create a usable language. Like Erlang, Elixir is a functional language with no mutable data - every value is constant. The only way to change the state of the application is to run a separate process and use message passing to manipulate the state.

The ability to run many processes easily in parallel is what makes Elixir/Erlang interesting. Each process is independent of all others, so if something breaks in one process nothing else is affected. By splitting an application into different processes (which is necessary anyway because everything is immutable) you can create a tree structure of processes. Each leaf can crash and be restarted by its parent, or the parent can choose to send the crash further up the chain by crashing itself. Somewhere up this chain there is a supervisor that restarts the crashed processes, keeping the application alive.

Going back to my SENG slackbot, I wanted it to be able to remind everyone, every day at a certain time, of the merge requests they still had to review. Initially I reworked my Ruby bot to post something to a given channel each day, but it turned out to be a bit buggy and would cause the bot to crash - mainly because of my lazy programming. For something that I didn’t really want to worry about, it was a pain.

It is probably quite obvious where this is going. I decided to rewrite the bot in Elixir, using an existing Slack module. The Quantum library also simplified posting at a certain time of day by adding a cron-like job scheduler that runs in its own process in the background. The main advantage of using Elixir here is that, with a simple supervisor starting each process in the application, any part that crashes will be automatically restarted. At one point there was a bug where any message received by the bot that didn’t have a user ID (e.g. a deleted or edited message) would crash it. But the crash was inconsequential, as the supervisor would just create a new process running the bot, which would reconnect. I left this version running for about a week before getting round to fixing it, as it wasn’t really a huge problem - unlike any problems with my Ruby bot, which would be very unhappy about any errors.

Another bonus of Erlang and Elixir being so oriented around processes is that the processes don’t have to be running on the same computer. Almost by magic, an application can be split across machines without having to rewrite a whole load of code - although this comes at the cost of writing code in the process-oriented style.

So I have a new favorite toy for writing server-side services. What really makes me enthusiastic about Elixir is that every part of the Ruby slackbot system that Logan and I made could be implemented in a single Elixir application. The web application would no doubt use Phoenix, and pass off requests to create new bots to a bot manager process, which would create a new process for each bot. If we somehow managed to get an influx of users, the bots could be split off onto a different server entirely. Redis would not be needed for communicating between the processes, and a stateful Elixir process could be used to store key/value pairs, easily persisted to a file using the built-in Erlang serialization (which works really well because everything is just a combination of lists, tuples, and maps).

The most important thing that I’ve learnt from this is that while you can do almost anything in your language of choice (see: Java developers), the overhead of twisting it to fit the problem might outweigh the cost of learning a new language that is better suited. Either that or I’m too easily excited by new programming languages and a mediocre Ruby developer.

Why I Dislike ATDD

This was written as the final section to a university lab report on testing, ATDD, and mocking.

Both Cucumber and Concordion aim to make it easier to write more understandable tests at a higher level - instead of writing unit tests that check very specific and granular aspects of a class, the acceptance tests ensure that a feature behaves as expected for the end user.

At my internship over the summer, I worked on an open source project management system called Redmine, and some of its plugins. The Redmine Backlogs plugin adds agile functionality to Redmine, and has a massive suite of Cucumber tests that I had to maintain. After seeing the ‘bad side’ of computer-evaluated acceptance tests and ATDD, I am very sceptical of the benefits of Cucumber - and have major doubts about Concordion.

The Backlogs tests consisted of about 20 feature files, each containing anywhere from one or two scenarios up to about six; this could be about 200 lines of steps. The actual definitions were split into three files (given, when, and then steps - it was a Ruby project, so it isn’t as strict as the Java implementation). These were about 1500 lines each.

Imagine the following scenario: you’re tasked with making the tests pass after some feature was added, or a change in the environment caused them to fail. Running the tests reveals which of the scenarios is failing, and you have a line in a feature file that is causing it to fail. Because each step definition is matched by a regular expression, you can’t find it by simply searching for the line from the feature file. Eventually you find it somehow - probably by doing a regex search for something similar to the step text.

Now that you’ve found the step definition, you can debug that step - or any of the steps above or below it in the scenario (which you have to find by repeating the same ordeal outlined before). You fix the scenario and any others that were affected by the change. You decide that it’s good practice to write a new test that tests the feature that was just added.

Here you have the reverse problem from debugging - you don’t know what steps have been defined to create the new test. Your IDE or editor likely doesn’t have any kind of autocomplete to help you fill out the steps in the scenario. Instead you add an expression to the step definition files that will be used in just your test - adding to the mass of bespoke step definitions already written.

This is obviously the worst case of Cucumber or any ATDD framework. On the flip side, I created my own plugin for Redmine while I was working there. When it came time to test it, we decided that Cucumber would be easiest - the whole team understood it, and it was already set up for one plugin, so the amount of work needed to get it working on another was minimal.

Working on another project from scratch, Cucumber was very easy to use - I knew off the top of my head every valid step definition and the options I could give it. When creating my own definitions, I could write them in such a way that they could be reused and extended later to test different situations. Obviously this is the difference between knowing a codebase and being completely new to it - as well as between the worst type of codebase (an unmaintained open source project) and the easiest to understand (a small project written entirely by you).

Even knowing that I was working in the worst case, I am sceptical of the benefits of computer-evaluated acceptance tests. Talking to Sam - a coworker over the summer, and all-round testing guru - he says that the idea of Cucumber is flawed to begin with. It assumes that the client or PO will provide acceptance criteria detailed enough to test the feature sufficiently, and specific enough to be turned into valid Cucumber instructions. If I were working with a PO who did give acceptance criteria of this quality, I would jump to Cucumber almost immediately.

Concordion, on the other hand, completely stumps me. I understand that having nicely formatted results to show off to stakeholders could prove useful, but the overhead required to test using Concordion seems to be through the roof for little or no gain. In a nutshell, what Concordion appears to do is take all the assertions out of a normal JUnit test and put them inline in HTML elements. Once again, this disconnect between the actual code and the expected results would make it harder to maintain and debug tests. In my mind Cucumber is better, because the content of the feature files is just the description and expected result, whereas the Concordion files mix the description and tests with the layout of the result.

It seems like the end result of Concordion could be achieved by parsing JUnit tests with a known format of JavaDoc and assertion messages. These could be parsed as the tests run to generate an HTML file - much like JavaDoc - with the test results, which could then be styled appropriately. In fact, this could probably be done with annotations and reflection, without the need to parse the test code manually.

So far my thoughts on ATDD are that developers should spend their time doing what they are best at, with the tools they work best with - nine times out of ten that is writing code in their preferred IDE, not writing English or HTML-JUnit hybrids that will be run as tests. Perhaps my view of ATDD is skewed because I first used Cucumber in the worst possible way. If I do end up using ATDD as part of my group project, I hope it is well managed and used appropriately - maybe I will come around to this way of testing.

Why I use Nginx

There are two very important reasons why I use Nginx to run my website:

  1. It was the first thing I used
  2. It has smaller config files than Apache

Even though I have been using it for quite some time, I didn’t really understand it - until I set up a second domain for static hosting to serve a Jenkins theme, which made me realise it’s not too bad.

The CSS would only be applied if the HTTP headers were correct (i.e. text/css rather than just text/plain), and files served through GitLab’s ‘raw’ mode have a text/plain header.

So this is my nginx config file, in sections.

http {
  include /etc/nginx/mime.types;
  passenger_root # Path to the passenger gem;
  passenger_ruby # Path to the ruby shim, from rbenv;

All of my config is in the http section. I’d guess that I can have other sections for different protocols, but this is just a basic web server so all I need is HTTP.

The include mime.types line makes nginx serve static files with the correct Content-Type header for each file extension - which is why serving the theme from here works for my Jenkins server when GitLab’s raw mode doesn’t.

server {
  location / {
    root /var/www/blog;
  }
}

This section defines a default server - anything that doesn’t match will just be sent to this, for example foobar.javanut.net will just go to the main blog. I could add more things in here if I wanted a subsection to go to somewhere else - maybe I wanted to serve some other content at javanut.net/my_stuff. I could just make a new location block and set the root to be a different location on my server.

server {
  listen 80;
  server_name static.javanut.net;
  root /var/www/static;
}

This is basically the same as the previous section - just another static file server pointing to a different folder. The main point here is that the server_name has been set, so it is only accessible at static.javanut.net. As this block shows, the location {} wrapper in the previous example is probably unnecessary.

  server {
    listen 80;
    server_name my_rails_app.javanut.net;
    root /var/www/my_rails_app/current/public;
    passenger_enabled on;
  }
}

Again this is very similar, but this one is for a Rails app using Passenger. Passenger needs to be installed when nginx is compiled - there is no plugin system for nginx.