Monday, 13 February 2012

Sometimes you just have to be lazy...

...or, solving problems with lazy evaluation.

I’ve recently been working on a simple template system, which also has the potential to grow into a full-blown document generation system. The basic principle is that a document is composed of a number of fragments. Each fragment is called in turn to generate its piece of the document and these pieces are concatenated together to produce the final result.

My first implementation was a super simple one. We just fold over the list of fragments, accumulating the output from each into a string builder. Each fragment may alternatively generate an error, which we collect along the way as well. At the end of the fold we return either the string document or a list of errors.

  def render(fragments: List[Fragment]): Either[List[String], String] = {
    fragments.foldLeft(RenderResult()) { (acc, fragment) =>
      fragment.generate fold (
        // on error: collect the message and carry on with the remaining fragments
        { err => acc.copy(errors = err :: acc.errors) },
        // on success: append the generated text to the document built so far
        { text => acc.doc.append(text); acc }
      )
    }.toEither
  }

  case class RenderResult(doc: StringBuilder = new StringBuilder(), errors: List[String] = List()) {
    def toEither: Either[List[String], String] = ...
  }

Now, this works great for documents of only a few kilobytes in size. However, as the documents grow to multi-megabyte sizes this approach becomes infeasible. The amount of memory required to hold the document being generated becomes huge. Multiple concurrent generations become impossible. We need a better solution.

The typical suggestion at this point is to stream output from each fragment straight into an output writer of some form rather than collect and build a string.

  def render(fragments: List[Fragment], writer: Writer): Option[List[String]] = {
    val errors = fragments.foldLeft(List[String]()) { (acc, fragment) =>
      fragment.generate fold (
        // on error: collect the message and keep checking the remaining fragments
        { err => err :: acc },
        // on success: stream the text straight out to the writer
        { text => writer.write(text); acc }
      )
    }
    if (errors.isEmpty) None else Some(errors)
  }

Unfortunately this doesn’t work here because of the possibility that fragments may generate errors. We don’t want to stream out partially complete documents. We could output to a temporary file and then copy this to the ultimate output on success, but this seems like a less than ideal solution.

A common approach in an imperative style would be to first run a check phase to see if there are any errors and then run a second render phase to produce the output.

  def check(fragments: List[Fragment]): List[String] = 
    fragments.foldLeft(List[String]()) { (acc, fragment) =>
      fragment.checkForErrors map (_ :: acc) getOrElse acc
    }

  def render(fragments: List[Fragment], writer: Writer): Unit = 
    fragments foreach { fragment => writer.write(fragment.render) }

However, in this case the building of fragments might be quite costly and we don’t want to have to process them once to see if there is an error and then again to render.

Fortunately, as we are using Scala and programming in a functional style, we have an alternative approach. Instead of returning a string or writing directly to a writer, we can return a lazily evaluated function. This function encapsulates all the side-effecting logic of doing the actual write, which is also nice from the point of view of being able to reason about our rendering code.

  def render(fragments: List[Fragment]): Either[List[String], (Writer) => Unit] = {
    fragments.foldLeft(RenderResult()) { (acc, fragment) =>
      fragment.generate fold (
        // on error: collect the message, no write has been performed
        { err => acc.copy(errors = err :: acc.errors) },
        // on success: accumulate the deferred string-generating function
        { func => acc.copy(fragmentFuncs = func :: acc.fragmentFuncs) }
      )
    }.toEither
  }

  case class RenderResult(fragmentFuncs: List[() => String] = List(), errors: List[String] = List()) {
    def toEither: Either[List[String], (Writer) => Unit] = {
      ...
      // the functions were prepended during the fold, so restore document order
      (writer: Writer) => fragmentFuncs.reverse foreach { f => writer.write(f()) }
      ...
    }
  }

The way this works is that the render method folds over each fragment, asking the fragment to do enough work to be certain that no errors will occur (e.g. validate data, take a temporary copy of any volatile values and so on). However, it doesn’t need to do the actual string generation. Instead, each fragment returns a function that will be called at some time in the future (or an error). The render code then accumulates all these small functions into one larger function and returns this.
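
To make the shape of this concrete, the contract each fragment satisfies in this lazy version might look something like the sketch below. The trait and its exact types are my assumption for illustration; the real Fragment interface isn't shown above.

  // Hypothetical sketch of the Fragment contract assumed by the lazy render:
  // generate validates data and captures any volatile values up front, then
  // returns either an error message or a deferred function that produces the
  // fragment's text when finally called.
  trait Fragment {
    def generate: Either[String, () => String]
  }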

The returned function, when called, takes a writer and iterates over all the small string generating functions, writing the output of each straight into the writer. Job done and in a mere fraction of the memory required by the original solution.
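
For completeness, the calling code might then look something like this sketch (the writeDocument name and the error handling are illustrative only):

  import java.io.Writer

  def writeDocument(fragments: List[Fragment], writer: Writer): Unit =
    render(fragments) fold (
      // failure: no side effects have run yet, so the output is untouched
      { errors => errors foreach println },
      // success: run the accumulated write function exactly once
      { writeDoc => writeDoc(writer) }
    )

Nothing touches the writer until we know the whole document can be produced.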

In cases where you need to generate large amounts of data, or where you want to delay side-effecting execution to a well-defined location in your code, lazy evaluation is a great approach to follow.

NOTE: All of the above code examples are significantly simplified from the real code in order to demonstrate the approach.

Saturday, 4 February 2012

Some TDD Advice

I’ve been doing some interviewing of new developers. Our company approach is to undertake a small pairing exercise with the candidate. We pick a relatively simple code kata and ask them to complete this. They drive the keyboard and do most of the work, while we, the interviewer, observe, ask questions, help point out obvious problems and provide assistance if necessary.

The one main requirement for these pairing sessions is that all work is done using a Test-Driven Development (TDD) approach. We do this for two reasons:

  1. We TDD all our code and therefore want to be comfortable that the candidate can work in this way.
  2. Building a system gradually using tests gives us a good insight into how the person thinks, how they go about solving problems and how they evolve their code.

Unfortunately, we are finding that many of the candidates are struggling with these sessions. They tend to start out strongly and then a few tests into the problem they become bogged down or stuck on furthering the solution.

I found this slightly puzzling, so I decided to undertake one of the katas following the exact same approach as we expect the candidates to follow. I picked a kata from one of the other devs that I had not seen before, so as to not prejudice my result. I started small with a simple test and gradually evolved the solution one test at a time. I was able to complete the kata easily within the allotted interview time and the code I produced was (IMHO) very clean and well structured.

While doing this I noticed one major difference between my approach and that of the people I have been interviewing. I follow a cycle as follows: write the test -> implement the code to make the test pass -> refactor the code to make it better -> move on to the next test. The devs that I have been interviewing who struggle miss out one of these key steps. Namely, they do: write the test -> implement the code to make the test pass -> move on to the next test. They miss the refactoring step at the end of each cycle.

What this tends to result in is that they easily build a first pass at some code to make a test pass. Then they build the code to pass the next test on top of this. This works for three or four iterations - but by this time their code is becoming quite complex and messy. Then, as the requirements get more complex, they struggle to understand how their code actually works and how to change it to pass the next test.

At this point they either become so bogged down that they fail to make any progress, or they start a massive refactoring exercise that breaks loads of code and most of their previously working tests. Either way, they pretty much fail the interview at this point.

By adding a small refactor step into every cycle we ensure that the code we implement to pass the next test is always built on a solid foundation of well structured, clean code from the previous one. Doing this would make so much difference to those interview candidates.

Let’s finish by looking at an example of the difference this approach makes:

In this example we are working on a kata to build a speaking clock. For example: 0:00 speaks 'midnight', 15:30 speaks 'half past three', 11:45 speaks 'quarter to twelve' and so on. After writing code to pass the first couple of tests we have something like:

class SpeakingClock {

  def speak(time: DateTime): String = {
    val hour = time.hourOfDay().get()
    val minute = time.minuteOfHour().get()

    if ( hour == 0 && minute == 0 ) "midnight"
    else if ( hour == 12 && minute == 0 ) "noon"
    else "unknown"
  } 
}
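
For reference, the tests driving this first version might look something like the following sketch. I'm assuming ScalaTest and Joda-Time here; the original session's test code isn't shown.

import org.joda.time.DateTime
import org.scalatest.FunSuite

class SpeakingClockTest extends FunSuite {

  val clock = new SpeakingClock

  test("0:00 speaks midnight") {
    assert(clock.speak(new DateTime(2012, 2, 4, 0, 0, 0, 0)) === "midnight")
  }

  test("12:00 speaks noon") {
    assert(clock.speak(new DateTime(2012, 2, 4, 12, 0, 0, 0)) === "noon")
  }
}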

The next tests would cover the cases for speaking the o'clock and half past times. Our candidate would most likely go on to just add to the above code, something like:

class SpeakingClock {

  def speak(time: DateTime): String = {
    val hour = time.hourOfDay().get()
    val minute = time.minuteOfHour().get()

    // hourToText (not shown) turns an hour number into its spoken word
    if ( hour == 0 && minute == 0 ) "midnight"
    else if ( hour == 12 && minute == 0 ) "noon"
    else if ( minute == 0 ) hourToText(hour) + " o'clock"
    else if ( minute == 30 ) "half past " + hourToText(hour)
    else "unknown"
  } 
}

Then we start adding tests covering quarter past and quarter to. At this point the algorithm starts to get more complex and the if/else block starts growing out of control. Refactoring this large if/else block becomes a major exercise under interview conditions and it is at this point that the solution invariably goes wrong and the candidate starts to struggle.

Now, by doing some refactoring after each step, we can change the first piece of code to something like:

class SpeakingClock {
  
  private val FixedTimes = Map[(Int, Int), String]((0, 0) -> "midnight",
                               (12, 0) -> "noon")

  def speak(time: DateTime): String = {
    val hour = time.hourOfDay().get()
    val minute = time.minuteOfHour().get()

    FixedTimes get ((hour, minute)) getOrElse {
      "unknown"
    }
  } 
}

Now we don't need to worry about the first case any more. It's nice and clean and out of the way. Also, if there are any new fixed times (say 17:00 is 'dinnertime') then we can easily add these to the solution with minimal additional code. Next, we add in the support for speaking the o'clock times:

class SpeakingClock {
  
  private val FixedTimes = Map((0, 0) -> "midnight",
                               (12, 0) -> "noon")

  private val HourNames = "twelve" :: "one" :: "two" :: "three" :: "four" :: "five" :: "six" ::
                          "seven" :: "eight" :: "nine" :: "ten" :: "eleven" :: Nil

  private val HoursToNames = ((0 to 11) zip HourNames).toMap

  def speak(time: DateTime): String = {
    val hour = time.hourOfDay().get()
    val minute = time.minuteOfHour().get()

    FixedTimes get ((hour, minute)) getOrElse {
      val adjustedHour = if ( hour > 11 ) hour - 12 else hour
      val hourName = HoursToNames(adjustedHour)
      "%s o'clock".format(hourName)
    }
  } 
}

Then we again go through the refactoring step and end up with something like:

class SpeakingClock {
  
  private val FixedTimes = Map((0, 0) -> "midnight",
                               (12, 0) -> "noon")

  private val HourNames = "twelve" :: "one" :: "two" :: "three" :: "four" :: "five" :: "six" ::
                          "seven" :: "eight" :: "nine" :: "ten" :: "eleven" :: Nil

  private val HoursToNames = ((0 to 11) zip HourNames).toMap

  def speak(time: DateTime): String = {
    val hour = time.hourOfDay().get()
    val minute = time.minuteOfHour().get()

    FixedTimes get ((hour, minute)) getOrElse {
      "%s o'clock".format(nameForHour(hour))
    }
  } 

  private def nameForHour(hour: Int) = {
    val adjustedHour = if ( hour > 11 ) hour - 12 else hour
    HoursToNames(adjustedHour)
  }
}

By pulling out the nameForHour method we leave code that is much better structured and cleaner. It's then so much easier to see what changes to make in order to pass the next test. By growing code in this way we end up with a working solution that is also always in a clean and tidy state.
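
To illustrate the point (this continuation is my sketch rather than part of the original session), the next tests for quarter past and quarter to can now be passed with small, local changes to the speak method:

  def speak(time: DateTime): String = {
    val hour = time.hourOfDay().get()
    val minute = time.minuteOfHour().get()

    FixedTimes get ((hour, minute)) getOrElse {
      minute match {
        case 0  => "%s o'clock".format(nameForHour(hour))
        case 15 => "quarter past %s".format(nameForHour(hour))
        case 30 => "half past %s".format(nameForHour(hour))
        // quarter to names the following hour, wrapping around at twelve
        case 45 => "quarter to %s".format(nameForHour((hour + 1) % 12))
        case _  => "unknown"
      }
    }
  }

Each new case is a one-line addition, because the hour naming has already been pulled out into nameForHour.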

So, the moral of this story is that if you are doing TDD, in an interview or not, don't forget to refactor after each test passes to ensure your code is well written before moving on to the next test.