Functional Programming Baby Steps
In a recent project, I was performing some basic array transformations in JavaScript. In my PR, someone pointed out that my approach was excessively complex and hard to follow. It was correct, but there was a cleaner way to represent the data transformation: the suggested approach was more functional, compared to my procedural one. Let's look at a simple concrete example.
Example
Imagine that we have a line representing values from 0 to 100:
|-------------------------------|
0                             100
Dividers can be placed onto the line to define segments or ranges. The dividers represent points on the line, while the segments represent the spaces between the dividers.
     40               60
|-----------*-------------------|
0          40                 100
In this configuration, there is a divider at the point 40. This results in two segments with values of 40 and 60, respectively. I’ll look at another example before I attempt to derive a pattern.
   20    20          60
|------*-----*------------------|
0     20    40                100
With a couple of basic examples, the relationship between the dividers and the segments starts to become pretty clear. Given either dividers or segments, we can convert from one to the other.
Dividers to Segments
Given a list of dividers
const dividers = [ 20, 40 ]
We can calculate the resulting segments in a few steps. If we treat the min (0) and max (100) as implicit dividers, the full set of dividers would be
const dividers = [ 0, 20, 40, 100 ]
So the segments are simply the values produced by subtracting each divider from the divider that follows it. In this example:
0 - NaN   20 - 0   40 - 20   100 - 40
  \ /       \ /       \ /       \ /
  NaN        20        20        60
Omitting the first invalid value, we have segment values of
const segments = [ 20, 20, 60 ]
To program this procedurally, we could do something like this:
const procedure = dividers => {
  // define the output
  const segments = [];
  // add the implicit dividers to the input
  const withImplicitDividers = [0, ...dividers, 100];
  // iterate through the array, subtracting each divider from the next
  for (let index = 1; index < withImplicitDividers.length; index++) {
    segments.push(withImplicitDividers[index] - withImplicitDividers[index - 1]);
  }
  return segments;
}
This is clear enough, but it could be simplified quite significantly using some functional techniques:
const moreFunctional = dividers =>
  [0, ...dividers, 100]
    .map((value, index, values) => value - values[index - 1])
    .slice(1);
This is really just a shorthand version of the previous approach. We still create a modified starting array containing the implicit dividers. But instead of explicitly iterating, calculating a value, and pushing that value to an output array, we use the map method to perform the same operation for each element in the array.
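To sanity-check that the two versions agree, we can run both against the example input (the definitions are repeated here so the snippet runs on its own):

```javascript
const procedure = dividers => {
  const segments = [];
  const withImplicitDividers = [0, ...dividers, 100];
  for (let index = 1; index < withImplicitDividers.length; index++) {
    segments.push(withImplicitDividers[index] - withImplicitDividers[index - 1]);
  }
  return segments;
};

const moreFunctional = dividers =>
  [0, ...dividers, 100]
    .map((value, index, values) => value - values[index - 1])
    .slice(1);

console.log(procedure([20, 40]));      // [ 20, 20, 60 ]
console.log(moreFunctional([20, 40])); // [ 20, 20, 60 ]
```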
Segments to Dividers
Converting in the other direction is slightly more complex. Given a list of segments
const segments = [ 20, 20, 60 ]
We can transform the data into dividers by adding each segment to the total of the preceding segments. Again, we can add an implicit segment for clarity:
const withImplicitSegments = [ 0, 20, 20, 60 ]
So the calculation would be
0 + NaN   20 + 0   20 + 20   60 + 40
  \ /       \ /       \ /       \ /
  NaN        20        40       100
The result includes an invalid value and 100 (the implicit maximum divider), so both of these can be omitted. Procedurally, this could look something like:
const segmentsToDividers = segments => {
  // initialize the output with the implicit starting divider
  const dividers = [0];
  // add the implicit segment to the input
  const withImplicitSegments = [0, ...segments];
  // iterate through the array
  for (let index = 1; index < withImplicitSegments.length; index++) {
    // each new divider is the previous divider plus the current segment
    dividers.push(withImplicitSegments[index] + dividers[dividers.length - 1]);
  }
  // trim the first and last elements (0 and 100)
  return dividers.slice(1, -1);
}
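A quick check that this recovers the dividers from the example segments (the function is repeated here so the snippet stands alone, with each new divider computed as the previous divider plus the current segment):

```javascript
const segmentsToDividers = segments => {
  // start with the implicit divider at 0
  const dividers = [0];
  const withImplicitSegments = [0, ...segments];
  for (let index = 1; index < withImplicitSegments.length; index++) {
    // each new divider is the previous divider plus the current segment
    dividers.push(withImplicitSegments[index] + dividers[dividers.length - 1]);
  }
  return dividers.slice(1, -1);
};

console.log(segmentsToDividers([20, 20, 60]));     // [ 20, 40 ]
console.log(segmentsToDividers([10, 10, 10, 10])); // [ 10, 20, 30 ]
```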
This is more convoluted, but it is still clear enough to follow. We use both the input data and the derived output data to calculate each value. Each time we iterate through the withImplicitSegments array, we add the current element to the last value in the output array (the previous divider). This can be collapsed to be less verbose, though possibly harder to interpret:
const segmentsToDividers2 = segments => {
  // collapsed, but harder to understand
  return [0, ...segments]
    .map((value, index, values) =>
      values
        .slice(0, index + 1)
        .reduce((accum, current) => accum + current)
    )
    .slice(1, -1);
}
This approach works the same as the previous method, but the summation of the preceding elements is performed by taking ever larger slices of the input array. This works, and accomplishes the same thing. But we can still achieve the same result without the added complexity. Let's use the reduce method a bit more pragmatically for this.
From MDN:
The reduce() method executes a reducer function (that you provide) on each member of the array resulting in a single output value.
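As a minimal illustration of that definition, a reducer that adds each element to a running total collapses the array to a single number:

```javascript
// Sum every element, starting the accumulator at 0.
const total = [20, 20, 60].reduce((accum, current) => accum + current, 0);
console.log(total); // 100
```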
So let’s provide a reducer function that returns an array containing the summation of each element and the total of the elements preceding it. That could look something like:
const segmentsToDividers3 = segments => {
  return [0, ...segments]
    .reduce(
      (accum, current, index) =>
        [...accum, (current + accum[index - 1]) || 0],
      []
    )
    .slice(1, -1);
}
Each iteration of the reducer returns an array containing the summation up to that point. This is basically the same as the previous example, where we slice the input array in each iteration. Again, this results in the same value as the previous examples, but we get to the end result without intermediary variables or additional operations. Everything is contained in the reducer provided to reduce.
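Running the reduce-based version against the earlier example confirms the behavior (the definition is repeated so the snippet is self-contained):

```javascript
const segmentsToDividers3 = segments =>
  [0, ...segments]
    .reduce(
      // each step appends the running total of the segments so far
      (accum, current, index) => [...accum, (current + accum[index - 1]) || 0],
      []
    )
    .slice(1, -1);

console.log(segmentsToDividers3([20, 20, 60]));     // [ 20, 40 ]
// it also holds for longer inputs
console.log(segmentsToDividers3([10, 20, 30, 40])); // [ 10, 30, 60 ]
```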
The important extra bits are the handling of the invalid cases. The addition of || 0 to the reducer function means that if the sum is invalid (NaN, as it is in the first iteration, where accum[index - 1] is undefined), we use 0 instead. This prevents us from seeding the output with invalid data. The second parameter to reduce ([]) is the initial value for the accumulator array. Again, if this didn't exist, reduce would use the first element (a number, not an array) as the accumulator, and the spread would fail.
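As a closing check, the two conversions are inverses of each other (within the 0 to 100 range), so a round trip should return the original data. The names here are illustrative; the bodies mirror the functions above:

```javascript
// dividers -> segments: difference between adjacent dividers
const dividersToSegments = dividers =>
  [0, ...dividers, 100]
    .map((value, index, values) => value - values[index - 1])
    .slice(1);

// segments -> dividers: running total of the segments
const segmentsToDividers = segments =>
  [0, ...segments]
    .reduce((accum, current, index) => [...accum, (current + accum[index - 1]) || 0], [])
    .slice(1, -1);

const dividers = [20, 40];
console.log(segmentsToDividers(dividersToSegments(dividers))); // [ 20, 40 ]
```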
So here we have a few different ways to approach some simple array manipulation. I haven’t gone into the performance aspects of each variation; I just wanted to highlight how a procedural problem could be solved using a more functional technique. For me, visualizing the data and the transformation in a new way helps to better appreciate the features of a language or a programming style. So yeah, do that.