Continuing the Pipeline with tee

I always seem to end up writing the same adapter functions over and over again when starting a new project. The functions are usually one or two lines and could easily just be used as lambdas throughout the application. I have found that giving these small functions a name can help make things more succinct. A lot of those functions can be found in FSharpx.Extras, specifically in Prelude.fs.


You’ve written your application and have a nice clean pipeline to define the flow of your operations. All of a sudden you need to add a side-effect in the middle of that pipeline. It could be a log to file or console, or even sending some events. We have some choices on how to handle this. We can break the pipeline into multiple blocks with let expressions and then evaluate the side-effect. There is nothing wrong with this approach, and it may be the best solution. In some simple cases we can use the tee function.
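The let-expression approach might look like the sketch below. The four stage functions are hypothetical stand-ins, just to make the snippet concrete.

```fsharp
// Hypothetical stand-ins for the real pipeline stages.
let getData key = sprintf "data-%s" key
let transformData (data : string) = data.ToUpper()
let outputLogger x = printfn "LOG: %A" x
let finalProcessing (data : string) = data.Length

// Splitting the pipeline with lets so the side-effect can run in between.
let transformed = "key" |> getData |> transformData
outputLogger transformed                    // the side-effect
let result = transformed |> finalProcessing
```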

let tee (f : 'a -> unit) x =
    f x
    x

The tee function acts like a plumbing “T” connector, allowing the contents to pass through to the side-effect without breaking the pipeline. We can see tee is a higher order function that takes a function as a parameter. It calls the passed in function with the second parameter x as its argument, disregards the result of that evaluation, and then returns x.

A small example would be adding debug logging.

let processData (outputLogger : 'a -> unit) someKey =
    someKey
    |> getData
    |> transformData
    |> tee outputLogger
    |> finalProcessing

We have a processData function that takes a logging function as an argument. We then wedge the outputLogger into our pipeline using the tee adapter function.

We could have used a lambda to hook in the logging. I’ve done that in the past and still do on occasion. I do believe there is value in knowing the names of these small and specialized functions. They have one purpose and do it well. I find I don’t have the mental overhead of stepping into the lambda when coming back to the code at a later date.
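For comparison, the lambda version of the same hook might look like this; the stage functions here are hypothetical stand-ins:

```fsharp
// Hypothetical stage functions, just to make the snippet concrete.
let getData key = sprintf "data-%s" key
let transformData (s : string) = s.ToUpper()
let outputLogger x = printfn "LOG: %A" x
let finalProcessing (s : string) = s.Length

let result =
    "key"
    |> getData
    |> transformData
    |> (fun x -> outputLogger x; x)   // an inline lambda doing what tee names
    |> finalProcessing
```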

FParsec: Alternative or Choice

We left off with a working solution for parsing any of “po/k”, “x4/0”, and “s12” into our choice type of Operation. We can always go back and look at our original solution here. We really want to be able to parse a bunch of operations separated by commas.

Parsing Many

When we want to parse a large blob of operations separated by a comma we can use the sepBy function.

  • sepBy : Parser<'a, 'u> -> Parser<'b, 'u> -> Parser<'a list, 'u>

We can see by the function signature that it takes two parsers as arguments. The first parser determines the result. The second parser determines the separator. Our operations are separated by a comma, so we’ll use the pchar ',' parser. Combining sepBy with our previous function parseOperation, we can now parse an entire blob of operations.

"po/k,x4/0,s12,x7/6" |> run (sepBy parseOperation (pchar ','))

Alternative or Choice

Revisiting the parseOperation function of

let parseOperation = pspin <|> pexchange <|> ppartner

we could have used the choice : seq<Parser<'a,'u>> -> Parser<'a,'u> function. choice takes a collection of parsers and evaluates them in order to find a match. This can help with readability when we have multiple alternative parsers, instead of weaving the <|> operator between them. We can re-write our function as:

let parseOperation = choice [ pspin; pexchange; ppartner ]


We’ve replaced our alternative operator <|> with the choice function for readability. We’ve also introduced the sepBy function to parse a block of comma separated operations. The simplest way to test our parsers is by using the run function as it does not require or use any state.

Full Code

Starting Small with FParsec

I had used regular expressions to process the inputs for Advent of Code 2017 Day 16. I thought it would be a good learning example to try FParsec. My goal was to take the sample strings given and convert them into a choice type (discriminated union).

Sample String

We’ll be processing inputs for three different operations. A sample input looks like this “po/k,x4/0,s12”. There are three operations which need to be parsed. We’ll create a choice type (discriminated union or sum type) to represent our operations.

We would like to parse the inputs accordingly.

"s12"  -> Spin of int
"x4/0" -> Exchange of int * int
"po/k" -> Partner of char * char

FParsec Operators

Parsing the first sample of "s12", we can use the function below.

let pspin = pchar 's' >>. pint32 |>> Spin

We are matching the first character using the pchar function. We combine it using the (>>.) operator with the pint32 function. This is all followed by (|>>) into the Spin constructor.

Breaking down the functions:

  • pchar : char -> Parser<char,'u> attempts to match a character to the one supplied. In this case we are looking for the character ‘s’.
  • pint32 : Parser<int32,'u> will convert the text to an integer, provided it is a legitimate integer.
  • (>>.) : Parser<'a,'u> -> Parser<'b,'u> -> Parser<'b,'u> combines 2 parsers and returns the second parser result, pint32 in our case.
  • (|>>) : Parser<'a,'u> -> ('a -> 'b) -> Parser<'b,'u> is like a map function. It unwraps the value in the parser, applies the function supplied, and packs it back into the parser. We take the integer value in the parser and create a Spin case.

Parsing the Exchange

We can parse “x4/0” and convert it to an Exchange type with the function:

let pexchange = 
    pchar 'x' 
      >>. tuple2 
            (pint32 .>> pchar '/') 
            pint32
    |>> Exchange

There are some familiar functions here: (>>.), (|>>), pchar, and pint32.

Let’s explore the new functions.

  • tuple2 : Parser<'a,'u> -> Parser<'b,'u> -> Parser<('a * 'b), 'u> takes two separate parsers and packs their results into a tuple.
  • (.>>) : Parser<'a,'u> -> Parser<'b,'u> -> Parser<'a,'u> similar to what we used earlier, however this one returns the first parser result.
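To see tuple2 and (.>>) in isolation (FParsec assumed open), the inner part of the parser can be run on its own:

```fsharp
// parses "4/0" into the tuple (4, 0); the '/' is matched but dropped by .>>
run (tuple2 (pint32 .>> pchar '/') pint32) "4/0"
```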

Parsing the Partner

Our final sample “po/k” to convert to a Partner type.

let ppartner = 
    pchar 'p' 
      >>. tuple2 
            (asciiLetter .>> pchar '/') 
            asciiLetter
    |>> Partner

This is fairly similar to the Exchange parser function pexchange. We did introduce one new function, asciiLetter. As the function name implies, this parser accepts any ASCII letter.

Putting them Together

We have our 3 parser functions, each handling their specific sample input. Now it’s time to combine them into a parser that can handle either sample and return the appropriate choice. We’ll do that using the (<|>) operator.

let parseOperation = pspin <|> pexchange <|> ppartner

The (<|>) operator allows us to define alternative parsers. This can be thought of like a pattern match for parsers.
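A quick sanity check of the combined parser with run (FParsec assumed open):

```fsharp
run parseOperation "s12"    // parses to Spin 12
run parseOperation "x4/0"   // parses to Exchange (4, 0)
run parseOperation "po/k"   // parses to Partner ('o', 'k')
```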


Parser combinators let us build up our parsing functions by piecing smaller functions together to create the overall parser. We’ve covered quite a few functions in this small example. Regex will likely work as well for this type of example, however I’m sure there are more complex solutions that can be elegantly built with parser combinators.

Full Parser Example

FParsec and FSI

I like exploring ideas and new libraries using the REPL (Read Evaluate Print Loop). There’s something about being able to get started quickly without having to plan out an entire project. I started looking at parser combinators. It’s been on my “to learn” list for some time now. I started with FParsec for .Net and hit a couple of snags getting things running with F# Interactive. The FParsec documentation has the solutions to these issues but I thought I would put them here so I can remember.

Library Load Order Matters

FParsec is made up of 2 libraries, FParsecCS.dll and FParsec.dll. Loading them up in FSI should be fairly straightforward, except you must load them in the correct order.

#r "FParsecCS.dll"
#r "FParsec.dll"

Fail to do so and you will be greeted with a message similar to this:

error FS0074: The type referenced through 'FParsec.CharStream`1' is defined in an assembly that is not referenced.

Source: FParsec Tutorial – Two DLLs
"If you reference the DLLs in the F# Interactive console, you need to reference FParsecCS.dll before you reference FParsec.dll."

Value Restriction

A value restriction occurs when type inference determines the type to be too generic. When working with the REPL, the first thing to do is add type annotations. With FParsec, the recommended way to alleviate the value restriction is to add a type alias for the user state and use it in the type annotation. I wasn’t using any state while learning to parse the text, so I created a type alias for unit and added it to the type annotation. My code looked like the below.

type NodeState = unit

let pname : Parser<string, NodeState> = manyChars asciiLetter .>> spaces

Source: FParsec Tutorial – Value Restriction


I hit two small hurdles when using FParsec with the REPL. The load order of the two FParsec libraries matters. The other hurdle was a value restriction, for which I created a type alias to annotate the parsing state for the combinators. After dealing with the library load order and value restriction, I was on to learning FParsec in F# Interactive.

Advent of Code 2017 – Day 3 Part 2

Advent of Code 2017 Day 3 part 2 was quite the gruelling exercise. We needed to locate the first memory value larger than the input we are given. The memory locations store an accumulated value (still spiralled) based on their existing neighbours. This is the example given.

147  142  133  122   59
304    5    4    2   57
330   10    1    1   54
351   11   23   25   26
362  747  806--->   ...

Location one still starts at 1. Location two only has one neighbour, with a value of 1, so it gets a value of 1.

1   1

Location three gets a value of 2 since it has neighbours one and two in which both have a value of 1.

    2
1   1

Location four has a value of 4 because its neighbours are locations three, two, and one, with values 2, 1, and 1. We then continue on to generate the rings.

Once we get past the initial ring, we can use some general rules to generate the values. At most a memory location will have 4 active neighbours. It came down to identifying the unique location requirements.

Since a location may potentially have neighbours in its own ring and the previous ring, we need to keep the last ring generated. I initially thought about the convoluted code that would need to be created, but decided to use Active Patterns.

I was able to determine the points of interest by using some of the data I had obtained from part 1. Specifically the row length. This allowed me to identify whether I was in a corner, before the corner, or after the corner. Once that was established I needed to handle the first and last element in the ring. Everything else was considered general middle elements.

I’ve included a couple of helper functions to handle wrapping the previous array’s index and generating the index offset from the current ring to the previous.

I opted to use mutation for generating a ring. I have my active patterns in the pattern match to help with code readability.

With all the building blocks in place I can now create a sequence of memory values based on the spiral structure.

memoryValues |> Seq.take 12 |> Seq.toList;;
val it : int list = [1; 1; 2; 4; 5; 10; 11; 23; 25; 26; 54; 57]

The final part was to take the first element greater than my input.
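That search is a one-liner over the sequence; memoryValues is the sequence shown above, and input stands in for the puzzle input:

```fsharp
// first memory value strictly greater than the puzzle input
let answer input = memoryValues |> Seq.find (fun v -> v > input)
```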

Advent of Code 2017 – Day 3 Part 1

Day 3 part 1 was about calculating the Manhattan Distance for a specific memory location to memory location 1. The memory blocks are stored in a spiralling out formation with the first block being in the middle of the grid. The memory blocks would look like this:

17  16  15  14  13
18   5   4   3  12
19   6   1   2  11
20   7   8   9  10
21  22  23---> ...

I decided to try and figure out an equation for calculating the locations and distance instead of using the brute force method. (It wasn’t until later I learned about the squares in the bottom right hand corner… oops)

Generating the Rings

The equation I came up with for generating a ring ended up being:
ringNumber * 2 * 4 + (sum of previous rings).

I would generate the sequence of rings with an index and the final value.

I could calculate the number of elements in a ring wall (middle and 1 corner) by taking the ring number and multiplying by 2.

Identifying location 1 as ring zero, I could say ring 1 would have 2 elements to make the middle and one corner. To make the full square I multiply by 4, which gives 8 elements in ring 1. Ring 2 would have 2 * 2 * 4 elements to make the square, 16. In the example below there are 8 ones in ring 1 and 16 twos in ring 2.

2   2   2   2   2
2   1   1   1   2
2   1   0   1   2
2   1   1   1   2
2   2   2   2   2

To get the final memory location I would add the previous ring’s final location to the number of elements required for the current ring.

Running the code would result in:

rings |> Seq.take 6 |> Seq.toList;;
[(0, 1); (1, 9); (2, 25); (3, 49); (4, 81); (5, 121)]
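The recurrence described above reproduces those final locations directly; a small sketch, independent of the actual solution code:

```fsharp
// final memory location of ring n: ring 0 is just location 1,
// and each later ring adds n * 2 * 4 elements to the previous final
let rec finalLocation n =
    if n = 0 then 1
    else finalLocation (n - 1) + n * 2 * 4

// finalLocation 1 = 9 and finalLocation 2 = 25, matching the output above
```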

Determining the Distance

The Manhattan distance can be calculated from the ring id and the element offset from the centre of a ring wall. Here is the code I used to determine the distance.

The premise of this part is to locate the ring before the passed location. We then determine whether the location is on a corner or on the low side of centre. This will determine the amount of steps taken to get to the centre of the ring wall. We then add the number of rings we need to step through to get to location 1.
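A sketch of that logic follows, built on a rings sequence of (ring, final location) pairs; the helper names here are illustrative, not the post’s actual code:

```fsharp
// (ring index, final memory location in that ring)
let rings = Seq.initInfinite (fun i -> (i, pown (i * 2 + 1) 2))

let manhattanDistance location =
    if location = 1 then 0
    else
        // the ring containing the location
        let ring, _ = rings |> Seq.find (fun (_, final) -> final >= location)
        let _, prevFinal = rings |> Seq.item (ring - 1)
        let wallLength = ring * 2
        // offset along the current wall, then steps to that wall's centre
        let offset = (location - prevFinal - 1) % wallLength
        let toCentre = abs (offset - (wallLength / 2 - 1))
        ring + toCentre

// e.g. manhattanDistance 12 = 3 and manhattanDistance 23 = 2,
// matching the example grid
```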


Computing the rings using odd squares:

let rings = Seq.initInfinite (fun i -> (i, pown (i * 2 + 1) 2))

Creating an Elmish Fable Project

Fable is a compiler that lets you write your web UI in F# and compiles it to JavaScript. This gives the development benefits of immutability, static types, and discriminated unions (sum types). These benefits alone make for more predictable functions and fewer bugs. There is also a library that lets us use an Elm-like architecture for added stability and fun.


We need to install a few things before we can create the project:

  • .Net Core 2
  • NodeJS
  • Yarn (package manager)

Installing the .Net Core Templates

In your console, enter the commands:

> dotnet new -i Fable.Template
> dotnet new -i Fable.Template.Elmish.React

Creating and Running Our Elmish App

After the templates are installed we can create and run our Elmish app by using the commands:

> dotnet new fable-elmish-react -n MyProjectName
> cd MyProjectName
> dotnet restore
> cd src
> yarn install

With all the dependencies installed we can now run the app using:

> dotnet fable yarn-run start

Open your browser and navigate to localhost:8080.

If we just want to build the app:

> dotnet fable yarn-run build

The Immutable Tool

Immutability is a useful tool. Think of it as an extension of your favourite editor or ReSharper. Using immutable values helps eliminate some of the easy to make mistakes when programming. Even if your language doesn’t fully support immutability you can still take advantage of the benefits.

What Is Immutability?

Immutability is something which does not change. You set a value and it doesn’t change through its lifetime. There are absolutely no mutations of that value. Unexpected mutations are a very common cause of issues in a program. Mutations can happen seemingly anywhere. They can happen concurrently from another thread, causing all sorts of issues which are difficult to track down. During my talk I said “there is nothing wrong with global state, the problem is when global state changes.” I was trying to make the audience think about what the real danger was in their applications. A value which cannot change is immutable.


The main benefit of using immutable values is the removal of unexpected changes. This leads to code that is easier to reason about. Future readers will have confidence that the values set don’t change later in the function, or by any other function. We limit mutation to specific areas of the code. This in turn lets us treat those locations as the dangerous areas. Solutions using immutable values also require less effort to move to a concurrent paradigm. Mutation is the bane of concurrent programming, and limiting it lowers the complexity of understanding the solution.


The main disadvantage of immutable values is always having to make a copy of the value or object to apply the changes. This is not a big disadvantage if the language supports immutability; it is tedious in a language that doesn’t, as the copying would have to be implemented by the developer. There are also some performance issues that may be noticed when mutating many values in a collection. This is due to all the copying, updating, and garbage collection. This again can be handled by the language if supported. Mutation is a form of optimization.

Levels of Immutability

There are many functional languages that rigidly disallow mutation; some notable ones are Haskell and Erlang. If you come from a language that heavily relies on mutating variables, then these rigid languages can really force you to think of different ways to solve the task at hand. There are also middle ground languages that support both immutable and mutable values. F# and Clojure are good examples of languages that are immutable by default. It’s up to the developer to mark values as mutable and accept the responsibility of that decision. Using an immutable by default language allows you to design your solution using immutability and later introduce mutation if the performance is required.

What’s nice about Clojure and F# is you get visual cues as to the dangerous areas in your code. F# uses the mutable keyword and a left arrow <- to define mutation.

let mutable x = 5
x <- 6

Clojure uses the earmuffs naming convention to mark dynamic, rebindable vars.

(def ^:dynamic *there-be* "dragons")


Immutability is a tool we can use to help avoid the easy mistake of unintentional mutation. This can lead to solutions that are easier to reason about, as the code is not littered with mutations. Limiting mutations to specific areas allows the reader to focus on those dangerous areas instead of worrying about unexpected changes everywhere. Like any tool, there are times when it doesn't fit. If performance is paramount to the solution, using mutation may be the way to go. In saying that, it's generally good practice to make the code correct first and then add mutations to make it fast.

There and Back Property

Coming up with properties can be difficult and require a lot of thought. We need to really understand the code or process we are testing against. Thankfully there are some common patterns that our predecessors have identified to help get us started. We’ll start looking at the there and back property.

There and Back

This is a property we can use to verify anything that sets and gets values, such as data transfer objects or writing and reading from disk. We can apply this property to any process that performs a transform that can be reversed. Serialization and deserialization are a good example of a transformation with a reversal. We can serialize and deserialize the data and confirm we get the exact same data that we put in.

I went searching the internet for a good example to test the there and back property. I ended up on the MSDN website with an encryption/decryption example. I’ve experienced some pain with an encryption library in the past, so this was perfect. Long story short, we had to use a 3rd party encryption library, and 8 months into production we found that every key had a subset of values that would not decrypt.

Encryption Library

I placed the encryption code in a C# library which can be found Property Based Testing – There and Back. The project can be built and there should be a ThereAndBack.dll in the bin/debug directory.


The property based testing library for .Net is FsCheck. The FsCheck library can be downloaded using nuget or paket. We can now reference FsCheck and the ThereAndBack.dll built from the downloaded project.

The function ``Check there and back of DateTime encryption`` is the property we are asserting. The body of the function creates an instance of the Encryption object from the C# library. It then converts the input DateTime to a string, encrypts, decrypts, and parses it back to a DateTime. The result of that process is then compared to the original input DateTime value. This gives us our “There and Back” property.

We execute the function with Check.Quick. FsCheck executes the function 100 times while generating a new DateTime value each time.
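The shape of that property can be sketched with FsCheck, here using string conversion of an int as a stand-in for the encrypt/decrypt pair (the Encryption class itself isn't reproduced here):

```fsharp
open FsCheck

// there: int -> string, back: string -> int; the round trip must be lossless
let ``there and back via string`` (n : int) =
    int (string n) = n

Check.Quick ``there and back via string``
```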


Starting with the “There and Back” property is a good way to get introduced to property based testing. This property can be used for any data transformation that has a means to reverse the data back to its original form. This makes the property good for testing any new serialization library we may be using, or writing. We can also use this property for testing writing and reading from disk to find corruption or hardware errors.

What Is Property Based Testing

I originally looked into property based testing a while ago. I knew that it was something I wanted to come back to and dig deeper. I got my chance and recently gave a talk on it at Prairie Dev Con Deliver. This conference is focused on Agile methodologies and testing.

Property Based Testing

Property based testing differs from unit testing in that it doesn’t use specific inputs and results. It instead uses a property to determine correct behaviour. I looked up the definition of Property in the dictionary and came up with “a trait belonging to a process with results common to all members of a set of inputs.” This is still a little abstract, but I found adding an example helped me understand further.

First Property

Any positive number multiplied by negative one will have a result less than zero. In this example:

  • The process is multiplication with negative one: (*) -1
  • The input set is any positive number: choose {1 .. }
  • The result is a number less than zero: < 0

These three pieces make up the property we are asserting.
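With FsCheck, the three pieces translate almost directly; PositiveInt is FsCheck's built-in wrapper that restricts the generated input set to positive numbers:

```fsharp
open FsCheck

// any positive number multiplied by -1 is less than zero
let ``positive times minus one is negative`` (PositiveInt n) =
    n * -1 < 0

Check.Quick ``positive times minus one is negative``
```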

Generative Advantage

One of the big advantages to property based testing over normal testing is the generative aspect of it. Property based testing doesn’t rely on specific example based testing. The system generates values based on the specifications required. Since the system is generating values at each run, the tests are always actively looking for defects. This also lets the system come up with many variations of inputs that we humans may not think of. This means we may detect a defect that can occur when given a large set of data.

A nice feature of some testing libraries, such as .Net’s FsCheck, is shrinking. When a falsifiable input is found, shrinking allows the system to minimize (shrink) the input values required to generate the defect. The idea is to make the inputs as manageable as possible so the developer can reproduce the error with minimal effort. The ideal use would be to take that falsifiable example and put it in a unit test.

Like unit tests, we come up with many different properties of the process we are testing. The difficulty with property based testing is that coming up with properties can be time consuming and hard. On the other hand, that time and thought does allow us to gain greater knowledge of the process we are testing. Thankfully, there are some common patterns that have emerged as good ways to get started. We will look at the There and Back property in the next post.


Property based testing is a unique way to ensure your code is as defect free as possible. We’ve defined three requirements for a property: the process, which is what we are testing; the input set, or subset, required to test the process (in the example above we used any positive number); and the result, with some defined commonality, of evaluating the process on our input set.