Functions are amazing!

This blog post is also available on the Fortnox developer blog.

TL;DR: It’s actually pretty amazing how far you get using just functions.

For almost a year now I’ve had the chance to spend my time at work developing apps with React Native, and it has been a fun ride so far. Developing for the front end is really refreshing for a back-end code monkey like me, and the instant feedback you get from changes in the user interface is rewarding. Another cool thing about programming in JavaScript is that I get to write functional code, and so far I haven’t missed object-oriented building blocks like classes at all.

In a way, I find it to be liberating that I don’t have to worry about whether a particular method should be a class method or an instance method, or whether the method makes sense in the context of a particular class. In functional programming, a function is just a function, and you assemble your program by composing functions to make the greater whole do useful things. Right now, I’m at a stage in my thinking where I’m starting to question the purpose of many of the concepts that we find in object-oriented programming. My thoughts can be summarized somewhat like the following: if I can write my business logic with just a few simple and reusable constructs, why should I feel compelled to use the more complex object-oriented toolbox?

I know, I know, there are no silver bullets, so functional programming (probably?) isn’t one, but for many problems we as programmers are trying to solve I think it is fair to question what we gain from the mental overhead that e.g. classes in object-oriented programming bring. In the object-oriented toolbox we find stuff like classes, class instances, instance methods, class methods, abstract methods, class inheritance, a myriad of design patterns to aid in the design of these abstractions, and much more. In the functional toolbox we find, well, functions. I think that in limiting the number of unnecessary abstractions we have a chance to reduce complexity in our code, and that’s something desirable, right?

Another cool effect of not programming with classes is that we get to think about data and behavior as two distinct things. Let me try to convey my thoughts on why this is neat via an example. In e.g. Clojure, a common way to represent some aggregate structure is to use a map. Maps are key/value data structures, just like objects are in JavaScript. Below we have an example of a map with two keys:

{:firstName "Martin" :lastName "Moberg"}

For a particular problem we’re trying to solve, the above example could be a perfect data structure to represent a person. It does not carry any behavior, it’s just key/value structured data. And it’s not an instance of any pre-defined class used as a cookie cutter because, again, it’s just structured data. Just toss the map as input to any function that knows how to deal with maps to start working with it, and in Clojure, there are plenty of those. And we can choose to apply this way of thinking in JavaScript too.
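To make that concrete, here is what the same idea could look like in JavaScript: a plain object as the data, and a free-standing function that knows how to work with objects of that shape (the fullName helper is just something I made up for this sketch):

```javascript
// just structured data, no class and no behavior attached
const person = { firstName: 'Martin', lastName: 'Moberg' };

// a plain function that knows how to deal with person-shaped objects
const fullName = (p) => `${p.firstName} ${p.lastName}`;

console.log(fullName(person)); // Martin Moberg
```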

Where I am right now in my understanding of programming, I’m thinking that for many problems, programming with functions, along with a way to control what parts of your code get exposed from a module, gets you really, really far. In JavaScript, we have the export statement that lets you decide what parts should be included in the module’s API. With some opt-in static typing for arguments and return values I’d be an even happier JavaScript programmer, but still, as a programmer who has spent years with object-oriented programming, I find it pretty amazing how far you can go using just functions.


Memoization

This blog post is also available on the Fortnox developer blog.

As developers we have a large selection of design patterns and techniques at our disposal when we design and implement our solutions. This is especially true in object-oriented code; patterns such as decorators, builders and factory methods are just some of the patterns you probably see during your workday if you are coding with that paradigm.

Most of these design patterns have names that reveal their purpose. If we take the three patterns mentioned in the previous paragraph, the decorator pattern decorates an object with additional behavior. The purposes of the builder and factory method patterns are also reflected in their names; the builder pattern lets you construct an instance of a class using a fluent interface, while the factory method pattern separates the construction of a complex object into its own distinct method.
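As an aside, in a functional setting the decorator idea boils down to wrapping one function in another. Here is a minimal sketch in JavaScript (withLogging and its log message are my own inventions for illustration):

```javascript
// a decorator that wraps any function with logging behavior
const withLogging = (fn) => (...args) => {
  console.log(`calling with ${args}`);
  return fn(...args);
};

const add = (a, b) => a + b;
const loggingAdd = withLogging(add);

loggingAdd(1, 2); // logs "calling with 1,2" and returns 3
```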

But then there are other techniques whose names do not convey their purpose as clearly. One example I would like to mention here is memoization, a concept I have stumbled upon several times over the years and figured must be something very mysterious. With a name like that it must be something really hard to understand, right?

Well, not really. Memoization is basically an optimization technique in which a cache is used to store previous results of expensive computations. If the computation has occurred once, subsequent calls to the same routine will return the cached value instead of redoing all that work.

In my attempt to understand how the technique works I decided to implement memoization in JavaScript, and this implementation can be seen below. The code defines a function called memoization, which closes over an initial numeric value. This value will then be used as the first operand for simple addition calculations. When you call memoization with a numeric value, that value will be used as the other operand and the result is returned. Previous results are cached according to the memoization technique.

let memoization = (val) => {
  let cache = {};

  return (other) => {
    // use the "in" operator so that falsy results (like 0) also count as cached
    if (!(other in cache)) {
      // replace the assignment below with your expensive computation
      cache[other] = val + other;
      console.log('Cached new value');
    }

    return cache[other];
  };
};

let plusFive = memoization(5);

// first call caches the result and outputs "Cached new value".
plusFive(3);

// second call reuses the cached result, i.e. no console output this time.
plusFive(3);

Now you’re probably thinking that a simple addition is not a very costly computation, and that’s true. The whole idea of this example was just to show what an implementation of the technique can look like. Apparently, the technique is useful for purposes other than optimization, but that is for you to dig into if you want to know more. :)

That’s all for now. Until next time!

Learning JavaScript properly

This blog post is also available on the Fortnox developer blog.

During the early 00s my programming was more or less about creating and maintaining web sites in LAMP setups. From that period I remember fighting many battles trying to make JavaScript code work across web browsers. It was not fun. For many years now I’ve kept my distance from the language, hoping for a better replacement to arrive at some point.

The thing is that JavaScript survived and is used extensively in all kinds of web setups, which is why I’ve decided to give the language another go. So I picked up JavaScript: The Good Parts (which had been collecting dust on a bookshelf at home for several years) and watched this really nice presentation on functions in JavaScript, recommended to me by a colleague at work. I read up on higher-order functions in the language and learned about some new stuff like arrow functions and constants (which are pretty cool). And then I decided to program something based on my understanding gained from all those resources.

My idea was to avoid the prototype stuff that I haven’t grasped fully yet and instead build something using functions and closures only. In the end I chose to build a simple snake game using JavaScript and React. Using React for a snake game is maybe overkill, but I found that the Create React App project gave me a nice bootstrapped environment to work in without the need to set up Babel, Webpack and all those other things you need to worry about. A few days in, there is a working version of the game that you can try here if you want. Just use the arrow keys or swipe to control the snake. Although the code is probably far from idiomatic JavaScript, I decided to publish it on GitHub, with the plan to ask some colleagues more experienced in the language to tell me what I’m doing wrong.

From this basic snake implementation I take with me a few insights. First, my time with the language was actually more pleasant than I had imagined. I believe that programming in Clojure and other functional programming languages has made me let go of the object-oriented paradigm a bit. This time around, when I decided to stick with using just functions and closures, the code felt more natural to write than when I tried to program like I do in Java.

Secondly, I miss working with persistent data structures and having robust ways to manage state changes. The ability to stay immutable takes away a lot of complexity not really related to the problem you want to solve and lets you instead focus on what’s important. You can get all this by simply using ClojureScript instead of JavaScript (as ClojureScript compiles to JavaScript) but I’ll save that subject for another post. 😉
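To give a flavour of what I mean, plain JavaScript can at least approximate this style by always producing new values instead of mutating existing ones. This is just a sketch of the idea; the state shape and the addPoints function are inventions for this example, not code from the game:

```javascript
// a state object we treat as immutable
const state = { score: 0, snake: [[1, 1], [2, 1]] };

// a pure function: takes the old state and returns a new one via the spread operator
const addPoints = (s, points) => ({ ...s, score: s.score + points });

const next = addPoints(state, 10);
console.log(state.score); // 0, the original state is untouched
console.log(next.score);  // 10
```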

A final and fun takeaway from this exercise is that the snake in a snake game can be modeled as a queue (along with a variable to control its growth when it feeds). Let’s have a look at how this works, starting with the algorithm to use each time the snake moves.

1. Push the next coordinate to the queue.
2a. If the grow variable is zero, then pop the first item from the queue.
2b. Else, decrease the grow variable by one.

Consider the following queue with x and y coordinates represented as tuples. JavaScript lacks both a tuple and a queue data type, so good old arrays will have to do.

let grow = 0;
const queue = [[1,1], [2,1], [3,1]];

So above we have a queue with three coordinates representing the snake. In this example, the snake is facing right, which means that the front of the snake is at [3,1] and the end of the snake is at [1,1]. Let’s move the snake one step to the right. According to the above algorithm, we push the next coordinate to the queue, which means that the queue now looks like this:

queue.push([4,1]);
// queue is now: [[1,1], [2,1], [3,1], [4,1]]

The grow variable is zero, so we pop off the first element like so:

queue.shift();
// queue is now: [[2,1], [3,1], [4,1]]

Alright, so now we have moved one step to the right! Let’s pretend that the snake found something to eat at [4,1], which means that the snake will grow. In the current version of the game, the snake grows by three whenever it eats, so we’ll increase the grow variable by three. Now, let’s move the snake to the right again. We first push [5,1] to the queue. This time, the grow variable is greater than zero, so instead of popping an item off the queue we reduce the grow variable by one. The outcome of the movement we just described is shown below.


grow = 3;          // the snake ate at [4,1]
queue.push([5,1]);
grow -= 1;
// grow is now: 2
// queue is now: [[2,1], [3,1], [4,1], [5,1]]
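Putting the walkthrough together, here is a sketch of what the move step could look like as a single function (moveSnake is an invented name for this example, not code from the actual game):

```javascript
// moves the snake one step by applying the queue algorithm above:
// push the next coordinate, then either pop the tail or decrease grow
const moveSnake = (queue, grow, next) => {
  queue.push(next);
  if (grow === 0) {
    queue.shift(); // pop the first item (the tail)
  } else {
    grow -= 1;
  }
  return grow;
};

let grow = 3;
const queue = [[2, 1], [3, 1], [4, 1]];
grow = moveSnake(queue, grow, [5, 1]);
// grow is now 2, queue is now [[2,1], [3,1], [4,1], [5,1]]
```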

As the snake keeps on moving the grow variable will eventually be reduced to zero. When that happens, we will start popping items off the queue again whenever the snake moves. I hope you found the snake discussion super interesting and life changing. 😉 That’s all for today, until next time!

Introducing Clojure

This blog post is also available on the Fortnox developer blog.

At work we have something called the "Java forum" where we discuss Java and related technologies from time to time. A while ago I felt that I wanted to contribute with a beginner’s talk about Clojure for programmers. I’m no expert in the language but I know enough to present at least the fundamentals. So I signed up for a talk and went home to prepare a presentation.

What I came to realize in front of the computer was that it is pretty hard to present Clojure in like 20-30 minutes. To generalize a bit, I would say it is practically impossible to give a proper introduction to any language in that limited time. Clojure is particularly tricky in this sense; a LISP dialect with persistent data structures is pretty far from what Java or JavaScript programmers easily relate to.

I ended up changing my presentation material over and over again, excluding and including things back and forth, but in the end I think it turned out alright. I had to leave out sets, atoms and other cool stuff as there simply was no time to go wide or deep. Anyway, I figured that someone else out there might be interested in an example "agenda" of how to introduce Clojure to developers in such a limited time frame, which is why I uploaded the material to a GitHub repo here. Each expression in the clj-file comes with comments and suggestions of what to say when evaluating them. I used the Light Table code editor during my presentation and that worked pretty well. With that editor you get a nice inline evaluation of your expressions, which is really handy while presenting.

I hope someone out there finds the material valuable in some way or another. Until next time!

A troublesome query

This blog post is also available on the Fortnox developer blog.

Last week we decided to have a look at one of our database queries that had started to become a real problem. The symptoms were super strange; occasionally during peak hours the query began to perform really, really badly, going from around 200 milliseconds per query up to about 30000 milliseconds per query. Yes, you read that right: we are talking about half a minute, or at times even longer. That’s like 150 times slower than normal and of course not acceptable at all, so we dove in head first trying to figure out why. This blog post describes our findings and how we found a solution to the problem.

The first thing we did was to set up a local environment to try to recreate the problem there. We use Postgres and Java in our application, so we decided to recreate the problem in a test harness using JUnit with the same database driver as we use in production. After setting the connection pool size to 1 and repeatedly firing away the problematic query we were able to recreate the behavior quite easily. The symptoms were really weird; the first nine queries were perfectly fine performance-wise, but from the tenth query and onward the query started to take up to 150 times longer. What’s going on?

After tweaking the query, altering the prepared statement parameters back and forth and trying to understand the Postgres server logs, we finally started to grasp what was happening under the hood. The key breakthrough was probably when we came across this information concerning server-side prepared statements. It turns out that there is a threshold that determines when server-side prepared statements kick in. This threshold has a default value of 5, which means that once the same statement has been executed five times, the driver switches to a server-side prepared statement.

When we increased this threshold value a bit (to 10), the bad performance of the query instead occurred around the fifteenth time rather than on the tenth. According to the PGStatement javadoc, setting the threshold value to 0 effectively turns off server-side prepared statements. We gave that a try and the problem instantly disappeared. Interestingly, the performance in our initial tests was not affected whatsoever; the query performed just as well using only client-side prepared statements, and it has not caused any problems in production since we turned server-side prepared statements off.

Server-side prepared statements might be a killer feature for some queries, but for the one we were struggling with it definitely was not. A very interesting finding we take from all this is a line of text from the PostgreSQL Extensions JDBC API documentation. It goes like this:

"You should be cautious about enabling the use of server side prepared statements globally."

Okay, but it’s not like we enabled them globally; the default threshold value is 5, which means that they are enabled by default if you do not explicitly turn server-side prepared statements off. If the recommendation is to not enable them globally, a default value of 0 would in my mind have been more reasonable, right? Hopefully there’s a good reason for its current default value that someone more into databases than me has the answer to. In any case, as demonstrated in the API documentation referred to earlier, it’s quite easy to set the default value via the connection string, like so:

// 0 means that we do not use server-side prepared statements
String url = "jdbc:postgresql://localhost:5432/test?prepareThreshold=0";

I hope this blog post can help remedy some headache out there. Until next time!