Choosing a web framework for F# in 2019

19 February 2019

While researching available choices for building a RESTful API in F#, I came across a talk from Gien Verschatse that really stood out (and not just because she covered Freya, Giraffe and Suave, three frameworks that were already on my list).

For some context, Sweep is a platform for triggering, designing and managing e-mails from events in SaaS/landing pages/mobile apps. It's being spun out of another project, and may or may not be open sourced in the near future.

New framework. Who dis?

Gien's problem was simple.

You want to try a new framework for a fresh project. You've got any number of options, each with their own strengths, weaknesses and ideal use cases.

How do you choose which one is right for your project, when you don't have any experience with any of them?

"Try them all" isn't ideal. Few of us have the luxury of sinking days into learning the ins and outs of every new framework that pops out of the woodwork.

But we still need to do some research. The wrong stack can hamstring your project (like when development on NancyFX screeched to a halt, stopping us from migrating to .NET Core).

So you want to do enough research not to make a bad choice, but not so much that you're endlessly descending rabbitholes in search of the latest and greatest.

It's a classic "exploration vs exploitation" optimization problem.

Gien has a pretty nice take on it, which I played around with in choosing the right framework for Sweep.

So whatcha whatcha want?

It's my site, and I'll put a Beastie Boys reference anywhere I damn well please.

Credit to https://www.flickr.com/people/76913520@N00

New frameworks have a lot of question marks hanging over them.

Does it do what I need? How performant is it? How well maintained is it? How easy is it to test?

Before answering these, it's important to understand the priorities for your particular project.

Many of these criteria - like "performance" or "activity" - are relative terms, not absolute. What's more, each framework usually adopts a set of trade-offs that favour one criterion over another.

Gien's point is that you need to see these criteria in the broader context of your own project. She starts by listing the criteria she thought important for her given (admittedly toy) project:

  • readability
  • learnability
  • simplicity
  • activity
  • testability
  • longevity
  • practical examples
  • extensibility
  • performance
  • specialisation

Next, these are pair-ranked to determine relative importance (list all possible criterion pairs, then assign a point to the criterion that's more important within a given pair).

So if readability is more important than learnability, assign a point to the former. If readability is not as important as simplicity, assign a point to the latter. This produces a nice ranked list - the higher the score attached to a criterion, the more important it is.
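
To make the mechanics concrete, here's a minimal F# sketch of the pair-ranking step (the criteria list is abbreviated, and the "which is more important?" judgement is a placeholder you'd answer yourself):

    // Pair ranking: compare every pair of criteria and give a point to
    // whichever one you judge more important, then rank by total points.
    let criteria = [ "readability"; "learnability"; "simplicity"; "activity" ]

    // All unordered pairs of criteria
    let pairs =
        [ for i in 0 .. List.length criteria - 1 do
            for j in i + 1 .. List.length criteria - 1 do
                yield criteria.[i], criteria.[j] ]

    // Stand-in for the human judgement "is a more important than b?"
    // (placeholder only - in practice this is your call, not a computation)
    let moreImportant (a: string) (b: string) = a < b

    // One point per pair for the "winner"; criteria that never win score zero
    let ranking =
        pairs
        |> List.map (fun (a, b) -> if moreImportant a b then a else b)
        |> List.countBy id
        |> List.sortByDescending snd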

Gien's initial criteria ranking per https://github.com/selketjah/risk-in-tech

What was important for Sweep

While Gien's list was pretty good, I had two additional criteria in mind.

Under the hood, Sweep is basically an API to receive events from third party code and handle email templates/logs, connected to a front-end that would allow for mail template design.

I knew this was going to involve a lot of boilerplate controllers and domain models.

I was interested to know how each framework fared when it came to development speed/tooling (i.e. how much code could be auto-generated?) and code reuse (i.e. could I share types or API specifications across both front- and back-ends?).

With this in mind, this is where priorities for Sweep ended up:

  • Speed/tooling - 10 (0.19)
  • Code reuse - 9 (0.17)
  • Practical examples - 8 (0.15)
  • Learnability - 6 (0.11)
  • Activity - 5 (0.09)
  • Testability - 5 (0.09)
  • Readability - 5 (0.09)
  • Simplicity - 4 (0.076)
  • Extensibility - 0
  • Performance - 0
  • Specialisation - 0

The parenthesized number is the normalized weight - each raw score divided by the total - i.e. the percentage "importance" of that criterion, something that will come in handy at the end of the assessment.
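
The normalization itself is trivial; a small F# sketch using the raw scores above:

    // Normalize raw pair-ranking scores into weights that sum to 1.0
    let rawScores =
        [ "speed/tooling", 10; "code reuse", 9; "practical examples", 8
          "learnability", 6; "activity", 5; "testability", 5
          "readability", 5; "simplicity", 4 ]

    let total = rawScores |> List.sumBy snd |> float   // 52

    let weights =
        rawScores
        |> List.map (fun (name, score) -> name, float score / total)
    // e.g. ("speed/tooling", 0.19...), ("code reuse", 0.17...), ...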

How do Giraffe, Freya, Saturn and Suave stack up?

Now that I had an idea of what would be important for Sweep, I needed to rank each framework against each of the chosen criteria.

This means using some Google-fu to quickly gauge how well a given framework fits our criteria, and then comparing each framework to see how they perform relative to one another.

Again, I agree with Gien that speed is the name of the game. We don't want to sink days into researching every available feature or benchmark. It's fine to pick a handful of examples - like the date of the most recent GitHub commit, or the total number of contributors - to make this assessment.

I don't want to repeat Gien's conclusions for her demo project, so I'll just focus on the two items I added to the criterion list.

Speed/tooling

Ideally, I was looking for an F# framework that would allow some form of Swagger code-gen. Best case scenario, this would let me auto-generate the REST API controllers from the spec. Worst case, I'd still be able to generate the API client.

Swagger code-gen is supported for ASP.NET and NancyFX, but I wasn't sure whether this was the case for Freya, Saturn, Suave or Giraffe.

Digging around for 30 minutes or so, I concluded that support was unfortunately pretty thin.

No information was available for Freya, so I assumed this was unsupported.

Likewise, there was a lonely, year-old open Saturn GitHub issue requesting support for doc-gen. Given the length of time that had elapsed, I assumed this was a dead end.

Suave.Swagger, however, seemed to be a reasonably mature package for doc-gen. Code-gen didn't seem to be on the cards.

Swagger doc-gen for Giraffe was being developed - seemingly by the author of Suave.Swagger - but was still unreleased. However, a few reports suggested it was usable nonetheless, and a release seemed to be on the cards for the near future.

What's more, Giraffe could act as middleware for ASP.NET Core. Since Swagger code-gen was supported for ASP.NET, I expected I'd at least be able to leverage code-gen for the ASP.NET controllers and domain models, then plug in Giraffe for the data layer and business logic.

Admittedly, this would mean splitting the code base between F# and C#, but since the latter would be auto-generated, I figured that wouldn't be a huge issue.
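
To give an idea of what that wiring looks like, here's a minimal sketch of Giraffe plugged into the ASP.NET Core pipeline, based on Giraffe's standard setup at the time (the routes and handlers are placeholders, not Sweep's actual API):

    open System
    open Giraffe
    open Microsoft.AspNetCore.Builder
    open Microsoft.AspNetCore.Hosting
    open Microsoft.Extensions.DependencyInjection

    // Placeholder routes - the generated ASP.NET controllers and Sweep's
    // real handlers would sit alongside (or replace) these
    let webApp =
        choose [
            GET  >=> route "/ping"   >=> text "pong"
            POST >=> route "/events" >=> text "event received"
        ]

    let configureApp (app: IApplicationBuilder) =
        // Giraffe runs as ordinary ASP.NET Core middleware
        app.UseGiraffe webApp

    let configureServices (services: IServiceCollection) =
        services.AddGiraffe() |> ignore

    [<EntryPoint>]
    let main _ =
        WebHostBuilder()
            .UseKestrel()
            .Configure(Action<IApplicationBuilder> configureApp)
            .ConfigureServices(configureServices)
            .Build()
            .Run()
        0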

GraphQL

One issue I haven't touched on is support for GraphQL. This was something I knew little about - a few blog posts had caught my eye, but I had never even played around with it, let alone rolled it out into production.

While I had no plans to leverage GraphQL for Sweep, I thought it worthwhile to note whether any of these frameworks was preferred for it.

Ultimately, they all seemed to be the same as far as GraphQL is concerned.

Code reuse

For various reasons, I was already committed to Vue.js on the front-end.

I knew this was going to hamper code reuse between front- and back-ends (for example, it meant that WebSharper was never on the table as an option, nor the SAFE Stack more generally).

During the course of my research, I had become aware of Fable, an F#-to-JavaScript compiler (similar in spirit to ClojureScript).

Fable seems like an awesome project, but its bindings for Vue.js (Fable.Vue) sounded a little immature (for example, Single File Components were not yet supported).

That being said, I don't think it mattered. There didn't seem to be any material difference between the four frameworks when it came to code reuse.

Ranking time

Gien took a similar approach to scoring/ranking the frameworks against one another. For each criterion, assign (N-1) points to the framework that performs best against that criterion, (N-2) points to the runner-up, and so on.

By multiplying these scores by the criterion weights and taking the sum, you get a final score for each framework. The highest score indicates the framework that best fits the criteria you selected.
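
Expressed as code, that final step is just a weighted sum per framework. A small F# sketch, using the normalized weights from earlier (truncated here; the per-criterion points are placeholders, not the actual assessment):

    // Weighted sum: each framework's rank points per criterion, multiplied by
    // that criterion's normalized weight, then summed. Points are made up.
    let weights =
        [ "speed/tooling", 0.19; "code reuse", 0.17; "learnability", 0.11 ]

    let frameworkScores =
        [ "Giraffe", [ "speed/tooling", 3; "code reuse", 2; "learnability", 2 ]
          "Suave",   [ "speed/tooling", 2; "code reuse", 2; "learnability", 3 ] ]

    let totalScore criterionPoints =
        weights
        |> List.sumBy (fun (criterion, weight) ->
            let points =
                criterionPoints
                |> List.tryFind (fun (c, _) -> c = criterion)
                |> Option.map snd
                |> Option.defaultValue 0
            float points * weight)

    let ranked =
        frameworkScores
        |> List.map (fun (name, points) -> name, totalScore points)
        |> List.sortByDescending snd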

Gien's final framework ranking per https://github.com/selketjah/risk-in-tech

For Sweep, Giraffe came out as the clear winner (although for certain criteria, like learnability and testability, I couldn't detect any real difference between the options).

Conclusion

When choosing a new stack or framework, you can sometimes feel like you're stuck in an endless maze.

I highly recommend watching Gien's talk for some tips on making quick, informed decisions.

These are all rough rule-of-thumb heuristics, but I think they worked pretty well for me.

I found it useful to verbalize/quantify the criteria that are important for a given project.

I'd normally just "listen to my gut", but I like the additional rigour of comparing each criterion pair one by one.

Rather than jumping in headfirst, I recommend thinking through/fleshing out the priorities for your project. Although this requires slightly more time upfront, in the long-term it minimizes the risk of sinking time and effort into a framework ill-suited to the needs of your project.
