Paul Khuong mostly on Lisp

Engineering a List Merge Sort

Back in November 2011, Takeru Ohta submitted a very nice patch to replace our (SBCL's) in-place stable merge sort on linked lists with a simpler, much more efficient implementation. Last May, I finally whipped myself into running a bunch of tests to estimate the performance improvements and make sure there weren't any serious regressions, and finally committed the latest version of that implementation. This post summarises what happened as I tried to find further improvements. The result is an implementation that's linear-time on nearly sorted or reverse-sorted lists, around 4 times as fast on slightly shuffled lists, and up to 30% faster on completely shuffled lists, thanks to design choices guided by statistically significant effects on performance (on one computer, a 2.8 GHz X5660).

I think the approach I used to choose the implementation could be ported to different contexts, and the tiny tweak to adapt the sort to nearly-sorted inputs is simple (simpler than detecting runs and merging balanced pairs), if a bit weak, and works with pretty much any merge sort.

A good starting point

The current code is reproduced below; the sort is parameterised on two functions: a comparator (test) and a key function to extract the property on which data are compared. The key function is often the identity, but having it available is more convenient than having to pull the calls into the comparator. The sort is also stable, so we use it for both stable and regular sorting; I'd like to keep it that way to minimise maintenance and testing overhead. This implementation seems like a good basis to me: it's simple, and pretty good (both in runtime and in number of comparisons). Trying to modify already-complicated code isn't fun, and there's little point trying to improve an implementation that doesn't get the basics right.

merge function
(defun merge-lists* (head list1 list2 test key &aux (tail head))
  (declare (type cons head list1 list2)
           (type function test key)
           (optimize speed))
  (macrolet ((merge-one (l1 l2)
               `(progn
                  (setf (cdr tail) ,l1
                        tail       ,l1)
                  (let ((rest (cdr ,l1)))
                    (cond (rest
                           (setf ,l1 rest))
                          (t
                           (setf (cdr ,l1) ,l2)
                           (return (cdr head))))))))
    (loop
     (if (funcall test (funcall key (car list2))  ; this way, equivalent
                       (funcall key (car list1))) ; values are first popped
         (merge-one list2 list1)                  ; from list1
         (merge-one list1 list2)))))
sort function
(defun stable-sort-list (list test key &aux (head (cons :head list)))
  (declare (type list list)
           (type function test key)
           (dynamic-extent head))
  (labels ((recur (list size)
             (declare (optimize speed)
                      (type cons list)
                      (type (and fixnum unsigned-byte) size))
             (if (= size 1)
                 (values list (shiftf (cdr list) nil))
                 (let ((half (ash size -1))) ; TRUNCATE would have worked
                   (multiple-value-bind (list1 rest)
                       (recur list half)
                     (multiple-value-bind (list2 rest)
                         (recur rest (- size half))
                       (values (merge-lists* head list1 list2 test key)
                               rest)))))))
    (when list
      (values (recur list (length list))))))

There are a few obvious improvements to try out: larger base cases, recognition of sorted subsequences, converting branches to conditional moves, finding some way to avoid the initial call to length (which must traverse the whole linked list), … But first, what would interesting performance metrics be, and on what inputs?

Brainstorming an experiment up

I think it works better to first determine our objective, then the inputs we consider, and, last, the algorithm variants to try and compare (decision variables). That's the reverse order of what's usually suggested when defining mathematical models. The difference is that, in our case, the space of inputs and algorithms is so large that we have to winnow them down by taking into account the objective and inputs.

Objectives

A priori, three basic performance metrics seem interesting: runtimes, number of calls to the comparator, and number of calls to the key functions. On further thought, the last one doesn't seem useful: if it really matters, a Schwartzian transform suffices to reduce these calls to a minimum, regardless of the sort implementation.
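(If it ever does matter, the decorate-sort-undecorate pass looks roughly like the sketch below; the function name is made up for illustration, and nothing beyond the standard stable-sort is assumed.)

Schwartzian transform sketch
;; Compute each key exactly once: pair every element with its key, sort on
;; the cached key, then strip the keys.  Illustrative code, not SBCL's.
(defun sort-with-cached-keys (list test key)
  (mapcar #'cdr
          (stable-sort (mapcar (lambda (x) (cons (funcall key x) x)) list)
                       test
                       :key #'car)))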

There are some complications when looking at runtimes. The universe of test and key functions is huge, and the sorts can be inlined, which sometimes enables further specialisation on the test and key. I've already decided that calls to key don't matter directly. Let's suppose it's very simple, the identity function. The number of comparisons will correlate nicely with performance when comparisons are slow. Again, let's suppose that the comparator is simple, a straight < of fixnums. The performance of sorts, especially with a trivial key and a simple comparator, can vary a lot depending on whether the sort is specialised or not, and both cases are relevant in practice. I'll have to test for both cases: inlined comparator and key functions, and normal sorts with unknown functions.

This process led to a set of three objective functions: the number of calls to the comparator, the runtime (number of cycles) of a normal, generic sort, and the number of cycles for a specialised sort.

Inputs

The obvious thing to vary in the input (the list to sort) is the length of the list. The lengths should probably span a wide range of values, from short lists (e.g. 32 elements) to long ones (a few million elements). Programs that are sort-bound on very long lists should probably use vectors, if only around the sort, and then invest in a sophisticated sort.

In real programs, sort is sometimes called on nearly-sorted or reverse-sorted sequences, and it's useful to sort such inputs faster, or with fewer comparisons. However, it's probably not that interesting if the change results in worse performance on shuffled or slightly-shuffled lists. I decided to test on sorted and fully shuffled inputs. I also interpolated between the two by reversing the order of randomly-selected subranges of the list a few times.
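(Concretely, the interpolation looks something like the sketch below; this is my reconstruction of the procedure from the description above, not the exact benchmark harness, and it assumes a non-empty sequence.)

partial shuffling sketch
;; Partially shuffle a sequence by reversing FLIPS randomly chosen
;; subranges in place: zero flips leaves the input sorted, more flips
;; disorder it further.
(defun flip-random-subranges (sequence flips &aux (length (length sequence)))
  (dotimes (i flips sequence)
    (let* ((start (random length))
           (end   (+ start 1 (random (- length start)))))
      (setf (subseq sequence start end)
            (nreverse (subseq sequence start end))))))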

Finally, linked lists are different than vectors in one key manner: contiguous elements can be arbitrarily scattered around memory. SBCL's allocation scheme ensures that consecutively-allocated objects will tend to be next to each other in memory as well, and the copying garbage collector is hacked to copy the spine of cons lists in order. However, a list can still temporarily exhibit bad locality, for example after an in-place sort. Again, I decided to go for ordered conses, fully scattered conses (only the conses were shuffled, not the values), and to interpolate, this time by swapping randomly-selected pairs of consecutive subranges a couple times.

Code tweaks

The textbook way to improve a recursive algorithm is to increase the base cases' sizes. In the current code the base case is a sublist of size 1; such a list is trivially sorted. We can easily increase that to two (a single conditional swap suffices), and an optimal sorting network for three values is only slightly more complicated. I decided to stop there, with base cases of size one to three. These simple sorts are implemented as a series of conditional swaps (i.e. pairs of max/min computations), and these can be executed branch-free, with only conditional moves. There's a bit more overhead, and conditional moves have more latency than correctly-predicted branches, but it might be useful with specialised sorts on shuffled inputs, and otherwise not hurt too much.
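(As a rough sketch of what such a base case looks like, the three-value case could be written as follows; this is illustrative code rather than what ended up in SBCL, and it leaves the key function out for brevity.)

size-3 sorting network sketch
;; A 3-input sorting network as three conditional swaps.  The comparison
;; count is always exactly 3, whatever the input order; with suitable
;; compiler support, each swap could become a pair of conditional moves.
(defun sort3-network (a b c test)
  (flet ((cswap (x y)          ; return x and y in sorted order, stably
           (if (funcall test y x)
               (values y x)
               (values x y))))
    (multiple-value-setq (a b) (cswap a b))
    (multiple-value-setq (b c) (cswap b c))
    (multiple-value-setq (a b) (cswap a b))
    (values a b c)))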

The merge loop could cache the result of calls to the key functions. It won't be useful in specialised sorts, and won't affect the number of comparisons, but it'll probably help the generic sort without really affecting performance otherwise.

With one more level of indirection, the merge loop can be branch-free: merge-one would be executed on references to the list heads, and these references can be swapped with conditional moves. Again, it's not obvious that the additional complexity will be a win.

Like I hinted back in May, we can accelerate the sort on pre-sorted inputs by keeping track of the last cons in each list, and tweaking the merge function: if the first value in one list is greater than the last in the other, we can directly splice them in order. Stability means we have to add a bit of complexity to handle equal values correctly, but nothing major. With this tiny tweak, merge sort is linear-time on sorted or reverse-sorted lists (the recursive step is constant-time, and merge sort recurses on both halves); it also works on recursively-processed sublists, and the performance is thus improved on nearly-sorted inputs in general. There's little point going through additional comparisons to accelerate the merger of two tiny lists; a minimal length check is in order. In addition to the current version, without any quick merge, I decided to try quick merges when the length of the two sublists summed to at least 8, 16 or 32.

Finally, I tried to see if the initial call to length (which has to traverse the whole list) could be avoided, e.g. by switching to a bottom-up sort. The benchmarks I ran in May made me realise that's probably a bad idea. Such a merge sort probably has to split its inputs in chunks of power-of-two (or some other base) sizes. These splits are suboptimal on non-power-of-two inputs; for example, when sorting a list of length (1+ (ash 1 n)), the final merge is between a list of length (ash 1 n) and a list of length… one. Knowing the exact length of the list means we can split recursive calls optimally, and that eliminates bumps in runtimes and in the number of comparisons around “round” lengths.

How can we compare all these possibilities?

I usually don't try to do anything clever, and simply run a large number of repetitions for all the possible implementations and families of inputs, and then choose a few interesting statistical tests or sic an ANOVA on it. The problem is, I'd maybe want to test with ten lengths (to span the wide range between 32 and a couple million), a couple shuffled-ness values (say, four, between sorted and shuffled), a couple scattered-ness values (say, four, again), and around 48 implementations (size-1 base case, size-3 base case, size-3 base case with conditional moves, times cached or uncached key, times branchful or branch-free merge loop, times four possibilities for the quick merge). That's a total of 7680 sets of parameter values. If I repeated each scenario 100 times, a reasonable sample size, I'd have to wait around 200 hours, assuming an average time of 1 second/execution (a realistic estimate, given how slow shuffling and scattering the lists can be)… and I'd have to do that separately when testing for comparison counts, generic sort runtimes and specialised sort runtimes!
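(Checking the arithmetic: 10 lengths × 4 shuffled-ness values × 4 scattered-ness values × 48 implementations = 7 680 scenarios; at 100 repetitions of roughly one second each, that's about 768 000 seconds, or roughly 213 hours, for each of the three objective functions.)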

I like working on SBCL, but not enough to give its merge sort multiple CPU-weeks.

Executing multiple repetitions of the full cross-product is overkill: it gives us enough information to extract the interaction between arbitrary pairs (or arbitrary subsets, in fact) of parameters (e.g. shuffled-ness and the minimum length at which we try to merge in constant-time). The thing is, I'll never even try to interpret all these crossed effects: there are way too many pairs, triplets, etc. I could try to determine interesting crosses ahead of time, and find a design that fits my simple needs.

Increasing the length of the list will lead to longer runtimes and more comparisons. Scattering the cons cells around will also slow the sorts down, particularly on long lists. Hopefully, the sorts are similar enough that they'll be affected comparably by the length of the list and how its conses are scattered in memory.

Shuffled lists should be slower to sort than pre-sorted ones, even without any clever merge step: on sorted input, all the branches conditional on a comparison are trivially predicted. Hopefully, the effect is more marked when sorted pairs of sublists are merged in constant time.

Finally, the interaction between the remaining algorithmic tweaks is pretty hard to guess, and there are only 12 combinations. I find it reasonable to cross the three parameters.

That's three sets of crossed effects (length and scatteredness, shuffledness and quick merge switch-over, remaining algorithmic tweaks), but I'm not interested in any further interaction, and am actually hoping these interactions are negligible. A latin square design can help bring the sample to a much more reasonable size.

Quadrata latina pro victoria

An NxN latin square is a square of NxN cells, with one of N symbols in each cell, under the constraint that each symbol appears exactly once in each row and column; it's like a relaxed sudoku.

When a first set of parameter values is associated with the rows, a second with the columns, and a third with the symbols, a latin square defines N² triplets that cover each pair of parameter values between the three sets exactly once. As long as interactions are absent or negligible, that's enough information to separate the effect of each set of parameters. The approach is interesting because there are only N² cells (i.e. trials), instead of N³, and the design works with very few repetitions, even with a single measurement per cell.

Latin squares are also fairly easy to generate. It suffices to fill the first column with the symbols, in arbitrary order, the second with the same order, rotated by one position, the third with a rotation by two, etc. The square can be further randomised by shuffling the rows and columns (with Fisher-Yates, for example). That procedure doesn't sample from the full universe of latin squares, but it's supposed to be good enough to uncover pairwise interactions.
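(A sketch of that generation procedure, with an inline Fisher-Yates shuffle; again, this is illustrative code, not the exact harness I used.)

latin square generation sketch
;; Random NxN latin square: cell (i, j) of the cyclic square holds
;; (i + j) mod n, and permuting rows and columns preserves the property
;; that each symbol appears once per row and per column, so we can compute
;; (rows[i] + cols[j]) mod n directly from shuffled index vectors.
(defun random-latin-square (n)
  (flet ((shuffle (vector)      ; in-place Fisher-Yates
           (loop for i from (1- (length vector)) above 0
                 do (rotatef (aref vector i)
                             (aref vector (random (1+ i)))))
           vector))
    (let ((rows (shuffle (coerce (loop for i below n collect i) 'vector)))
          (cols (shuffle (coerce (loop for i below n collect i) 'vector)))
          (square (make-array (list n n))))
      (dotimes (i n square)
        (dotimes (j n)
          (setf (aref square i j)
                (mod (+ (aref rows i) (aref cols j)) n)))))))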

Latin squares only make sense when all three sets of parameters are the same size. Latin rectangles can be used when one of the sets is smaller than the two others, by simply removing rows or columns from a random latin square. Some pairs are then left unexplored, but the data still suffices for uncrossed linear fits, and generating independent rectangles helps.

I'll treat all the variables as categorical, even though they take numeric values: it'll work better for non-linear effects, and I'm not sure what functional form I should use for the effect of list size and cell scattering, especially when considering runtimes.

Optimising for comparison counts

Comparison counts are easier to analyse. There's no need to worry about micro-optimisation issues like conditional moves or scattered conses, and results are deterministic for fixed inputs. That means there are far fewer possibilities to consider.

Four values for the minimum length before checking for constant-time merger (8, 16, 32 or never), and ten shuffled-ness values (sorted, one, two, five, ten, 50, 100, 500 or 1000 flips, and full shuffle) seem reasonable; when the number of flips is equal to or exceeds the list length, a full shuffle is performed instead. That's 40 values for one parameter set.

There are only two interesting values for the remaining algorithmic tweaks: size-3 base cases or not (only size-1).

This means there should be 40 list lengths to balance the design. I chose to interpolate from 16 to 16M (inclusively) with a geometric sequence, rounded to the nearest integer.

The resulting latin rectangles comprise 80 cells. Each scenario was repeated five times (starting from the same five PRNG states), and 30 independent rectangles were generated. In total, that's 12 000 executions. There are probably smarter ways to do this and better exploit the fact that there are only two algorithmic-tweak variants; I stuck to a very thin latin rectangle to stay closer to the next two settings. Still, a full cross-product with 100 repetitions would have called for 320 000 executions, nearly 30 times as many.

I wish to understand the effect of these various parameters on the number of times the comparison function is called to sort a list. Simple models tend to suppose additive effects. That doesn't look like it'd work well here. I expect multiplicative effects: enabling quick merge shouldn't add to or subtract from the number of comparisons, but scale it (hopefully by less than one). A transformation by log will convert these multiplications into additions. ANOVA methods and the linear regression I'll use are parametric methods, and suppose that the experimental noise roughly follows a normal distribution. That seems reasonable: variations will be caused by a sum of many small differences caused by the shuffling, and we're working with many repetitions, hopefully enough for the central limit theorem to kick in.
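(In other words, the model posits something like Count ≈ f(length, scatter) × g(shuffledness, quick-merge cutoff) × h(other tweaks), so that log2(Count) ≈ F + G + H + noise; that additive form, with one coefficient per level of each categorical variable, is exactly what the lm calls below fit.)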

The latin square method also depends on the absence of crossed interactions between rows and columns, rows and symbols, or columns and symbols. If that constraint is violated, the design is vulnerable to Type I errors: variations caused by interactions between rows and columns, e.g., could be wrongly attributed to the symbols.

My first step is to look for such interaction effects.

two-way ANOVA for comparison counts
> anova(lm(log(Count, 2) ~ Size.Scatter*Shuffle.Quick
                         + Size.Scatter*Leaf.Cache.BranchMerge
                         + Shuffle.Quick*Leaf.Cache.BranchMerge
                         + 0, # 0 y-intercept
           data))
Analysis of Variance Table

Response: log(Count, 2)
                                        Df  Sum Sq Mean Sq  F value  Pr(>F)
Size.Scatter                            40 3935314   98383 6.90e+06 < 2e-16 ***
Shuffle.Quick                           39    7739     198 1.39e+04 < 2e-16 ***
Leaf.Cache.BranchMerge                   1       8       8 5.96e+02 < 2e-16 ***
Size.Scatter:Shuffle.Quick            1159     770       1 4.66e+01 < 2e-16 ***
Size.Scatter:Leaf.Cache.BranchMerge     39       1       0 1.03e+00    0.41
Shuffle.Quick:Leaf.Cache.BranchMerge    39       1       0 2.10e+00 7.5e-05 ***
Residuals                            10683     152       0
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

The main effects are statistically significant (in order, list length, shuffling and quick merge limit, and the algorithmic tweaks), with p < 2e-16. That's reassuring: the odds of observing such results if they had no effects are negligible. Two of the pairs are, as well. Their effects, on the other hand, don't seem meaningful. The Sum Sq column reports how much of the variance in the data set is explained when the parameters corresponding to each row (one for each degree of freedom Df) are introduced in the fit. Only the Size.Scatter:Shuffle.Quick row really improves the fit, and that's with 1159 degrees of freedom. The mean improvement in fit, Mean Sq (per degree of freedom), is small.

The additional assumption that interaction effects are negligible seems reasonably satisfied. The linear model should be valid, but, more importantly, we can analyse each set of parameters independently. Let's look at the regression with only the main effects.

one-way ANOVA for comparison counts
> fit <- lm(log(Count, 2) ~ Size.Scatter + Shuffle.Quick
                          + Leaf.Cache.BranchMerge + 0, data)
> anova(fit)
Analysis of Variance Table

Response: log(Count, 2)
                          Df  Sum Sq Mean Sq F value Pr(>F)
Size.Scatter              40 3935314   98383 1269019 <2e-16 ***
Shuffle.Quick             39    7739     198    2560 <2e-16 ***
Leaf.Cache.BranchMerge     1       8       8     110 <2e-16 ***
Residuals              11920     924       0
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

The fit is only slightly worse than with pairwise interactions. The coefficient table follows. What we see is that half of the observations fall within 12% of the linear model's prediction (the worst case is off by more than 100%), and that nearly all the coefficients are statistically significantly different from zero.

one-way coefficients for comparison counts
> summary(fit)
Call:
lm(formula = log(Count, 2) ~ Size.Scatter + Shuffle.Quick
                           + Leaf.Cache.BranchMerge + 0,
   data = data)

Residuals:
    Min      1Q  Median      3Q     Max
-1.1105 -0.1746  0.0052  0.1440  1.3586

Coefficients:
                            Estimate Std. Error t value Pr(>|t|)
Size.Scatter32xF             7.20776    0.02279  316.23  < 2e-16 ***
Size.Scatter45xF             7.98704    0.02276  350.87  < 2e-16 ***
Size.Scatter63xF             8.52335    0.02243  380.06  < 2e-16 ***
[...]
Shuffle.QuickFx8            -2.77266    0.02284 -121.39  < 2e-16 ***
Shuffle.QuickFx16           -2.45345    0.02293 -106.98  < 2e-16 ***
Shuffle.QuickFx32           -2.14230    0.02289  -93.59  < 2e-16 ***
Shuffle.QuickFxF            -0.55217    0.02289  -24.12  < 2e-16 ***
[...]
Leaf.Cache.BranchMergeTxFxT  0.05320    0.00508   10.47  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.278 on 11920 degrees of freedom
Multiple R-squared:    1,       Adjusted R-squared:    1
F-statistic: 6.36e+05 on 80 and 11920 DF,  p-value: <2e-16

The Size.Scatter coefficients are plotted below. The number of comparisons grows with the length of the lists (these are base-2 log counts, and the progression is nearly linear, following the lengths' geometric growth). The logarithmic factor's effect shows in the curve's slight convexity (compare to the linear interpolation in blue). Scattering conses in memory wouldn't affect the number of comparisons, so the list was always kept consecutively laid out.

The Shuffle.Quick values are the coefficients for the crossed effect of the level of shuffling and the minimum length (cutoff) at which a constant-time merge may be executed; their values are reported in the next histogram, with error bars corresponding to one standard deviation. Hopefully, a shorter cutoff lowers the number of comparisons when lists are nearly presorted, and doesn't increase it too much when lists are fully shuffled. On very nearly sorted lists, looking for presorted inputs as soon as eight or more values are merged divides the number of comparisons by a factor of 4 (these are base-2 logarithms), and the advantage smoothly tails off as lists are shuffled better. Overall, cutting off at eight seems to never do substantially worse than the other choices, and is even roughly equivalent to vanilla merges on fully-shuffled inputs.

The coefficient table tells us that nearly all of the Shuffle.Quick coefficients are statistically significant. It's important to remember that the statistical significance values are for a null hypothesis that each of these coefficients is actually zero. The measures I observed would be extremely unlikely if that were the case, but that test tells us nothing about the relationship between two coefficients. The standard deviations help us detect hugely significant differences, but we can use statistical tests to try and make finer distinctions. Tukey's Honest Significant Difference method lets us derive intervals for the difference between two coefficients at given confidence levels. For example, the 99.99% confidence interval for the difference between cutoff at 8 and cutoff at 32, on lists that were swapped 50 times, is [-0.245, -0.00553]. This result means that, if the hypotheses for Tukey's HSD method are satisfied, the probability of observing the results I found is less than .01% when the actual difference in effect between cutoff at 8 and 32 is outside that interval. Since even the upper bound is negative, it also means that the odds of observing the current results are less than .01% if the real value for cutoff at 8 isn't lower than that of cutoff at 32. In other words, we're pretty sure that looking for quick merges as early as length eight pays off compared to only doing so for merges of length 32 or more; or, we could also try and prove that's the case. Overall, cutting off at length eight results in fewer comparisons than the other options at nearly all shuffling levels (with very high confidence), and the few cases where it doesn't aren't statistically significant at a 99.99% confidence level; then again, absence of evidence isn't evidence of absence, but the estimated differences themselves tend to be tiny anyway.

The last row, Leaf.Cache.BranchMergeTxFxT, reports the effect of adding base cases that sort lists of length 2 and 3. Doing so causes 4% more comparisons. That's a bit surprising: adding specialised base cases usually improves performance. The issue is that sorting networks are only optimal for data-oblivious executions. Sorting three values requires, in theory, lg 3! ≈ 2.58 bits of information (comparisons). A sorting network can't do better than the ceiling of that, 3 comparisons, but if control flow can depend on the comparisons, some lists can be sorted in two comparisons (e.g. an already-sorted triple: once the first two comparisons report no swap, the third is unnecessary).

It seems that, to minimise the number of comparisons, I should avoid sorting networks for the size-3 base case, and try to detect opportunities for constant-time list merges. Doing so as soon as the merged list will be of length eight or more seems best, but the difference with higher switch-over values seems marginal.

Optimising the runtime of generic sorts

I decided to keep the same general shape of 40x40xM parameter values when looking at the cycle count for generic sorts. This time, scattering conses around in memory will affect the results. I went with conses laid out linearly, 10 swaps, 50 swaps, and full randomisation of addresses. These 4 scattering values leave room for 10 list lengths, in a geometric progression from 32 to 16M. Now, it makes sense to try all the other micro-optimisations: trivial base case or base cases of size up to 3, with branches or conditional moves (3 choices), cached calls to key during merge (2 choices), and branches or conditional moves in the merge loop (2 choices). This calls for latin rectangles of size 40x12; I generated 10 rectangles, and repeated each cell 5 times (starting from the same 5 PRNG seeds). In total, that's 24 000 executions. A full cross-product, without any repetition, would require 19 200 executions; the latin square design easily saved a factor of 10 in terms of sample size for equivalent power.

I'm interested in execution times, so I generated the inputs ahead of time, before sorting them; during both generation and sorting, the garbage collector was disabled to avoid major slowdowns caused by the mprotect-based write barrier.

Again, I have to apply a logarithmic transformation for the additive model to make sense, and first look at the interaction effects. The situation is similar to the previous section on comparison counts: one of the crossed effects is statistically significant, but it's not overly meaningful, especially in terms of interpretation. A quick look at the coefficients reveals that the speed-ups caused by treating nearly-sorted lists in close to linear time are overestimated on short lists and slightly underestimated on long ones.

two-way ANOVA for generic sort runtime
> anova(lm(log(Cycles, 2) ~ Size.Scatter*Shuffle.Quick
                          + Size.Scatter*Leaf.Cache.BranchMerge
                          + Shuffle.Quick*Leaf.Cache.BranchMerge
                          + 0,
           data))
Analysis of Variance Table

Response: log(Cycles, 2)
                                        Df   Sum Sq Mean Sq  F value Pr(>F)    
Size.Scatter                            40 15666210  391655 3.97e+07 <2e-16 ***
Shuffle.Quick                           39    13396     343 3.48e+04 <2e-16 ***
Leaf.Cache.BranchMerge                  11       58       5 5.37e+02 <2e-16 ***
Size.Scatter:Shuffle.Quick            1477     1620       1 1.11e+02 <2e-16 ***
Size.Scatter:Leaf.Cache.BranchMerge    429        4       0 8.90e-01   0.95    
Shuffle.Quick:Leaf.Cache.BranchMerge   429        2       0 5.40e-01   1.00    
Residuals                            21575      213       0                    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

We can basically read the ANOVA with only main effects by skipping the rows corresponding to crossed effects and instead adding their values to the residuals. There are statistically significant coefficients in there, and they're reported below. Again, I'm quite happy to be able to examine each set of parameters independently, rather than having to understand how, e.g., scattering cons cells around affects quick merges differently than the vanilla merge. Of course, absence of evidence isn't evidence of absence; I'm just trying to do my best with induction.

one-way coefficients for generic sort runtime
> summary(lm(log(Cycles, 2) ~ Size.Scatter + Shuffle.Quick
                            + Leaf.Cache.BranchMerge + 0, data))
Call:
lm(formula = log(Cycles, 2) ~ Size.Scatter + Shuffle.Quick + 
    Leaf.Cache.BranchMerge + 0, data = data)

Residuals:
    Min      1Q  Median      3Q     Max 
-0.9658 -0.1637 -0.0036  0.1446  1.4650 

Coefficients:
                               Estimate Std. Error t value Pr(>|t|)    
Size.Scatter32x10              14.96752    0.01684  888.96  < 2e-16 ***
Size.Scatter32x500             15.01394    0.01677  895.22  < 2e-16 ***
Size.Scatter32xF               15.05236    0.01703  884.05  < 2e-16 ***
Size.Scatter32xT               15.03750    0.01702  883.43  < 2e-16 ***
Size.Scatter138x10             17.44223    0.01703 1024.09  < 2e-16 ***
Size.Scatter138x500            17.45624    0.01696 1029.29  < 2e-16 ***
Size.Scatter138xF              17.45541    0.01696 1029.31  < 2e-16 ***
Size.Scatter138xT              17.48772    0.01710 1022.69  < 2e-16 ***
[...]
Shuffle.QuickFx8               -2.52552    0.01607 -157.17  < 2e-16 ***
Shuffle.QuickFx16              -2.20712    0.01606 -137.42  < 2e-16 ***
Shuffle.QuickFx32              -1.92409    0.01606 -119.81  < 2e-16 ***
Shuffle.QuickFxF               -0.58187    0.01606  -36.24  < 2e-16 ***
Shuffle.Quick1x8               -1.84568    0.01607 -114.86  < 2e-16 ***
Shuffle.Quick1x16              -1.64170    0.01607 -102.19  < 2e-16 ***
Shuffle.Quick1x32              -1.50046    0.01606  -93.42  < 2e-16 ***
Shuffle.Quick1xF               -0.48014    0.01607  -29.87  < 2e-16 ***
[...]
Leaf.Cache.BranchMergeCMOVxFxT -0.03229    0.00877   -3.68  0.00023 ***
Leaf.Cache.BranchMergeCMOVxTxF -0.04451    0.00877   -5.08  3.9e-07 ***
Leaf.Cache.BranchMergeCMOVxTxT -0.08632    0.00877   -9.84  < 2e-16 ***
Leaf.Cache.BranchMergeFxFxF     0.09094    0.00877   10.37  < 2e-16 ***
Leaf.Cache.BranchMergeFxFxT    -0.01637    0.00877   -1.87  0.06193 .  
Leaf.Cache.BranchMergeFxTxF     0.01617    0.00877    1.84  0.06519 .  
Leaf.Cache.BranchMergeFxTxT    -0.06236    0.00877   -7.11  1.2e-12 ***
Leaf.Cache.BranchMergeTxFxF     0.03694    0.00877    4.21  2.5e-05 ***
Leaf.Cache.BranchMergeTxFxT    -0.03665    0.00877   -4.18  2.9e-05 ***
Leaf.Cache.BranchMergeTxTxF    -0.03920    0.00877   -4.47  7.9e-06 ***
Leaf.Cache.BranchMergeTxTxT    -0.08567    0.00877   -9.77  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.277 on 23910 degrees of freedom
Multiple R-squared:    1, Adjusted R-squared:    1 
F-statistic: 2.27e+06 on 90 and 23910 DF,  p-value: <2e-16

The coefficients for list length crossed with scattering level are plotted below. Sorting seems to be slower on longer lists, especially when the cons cells are scattered; sorting long scattered lists is about twice as slow as sorting nicely laid-out lists of the same length. The difference between linear and slightly scattered lists isn't statistically significant.

Just as with comparison counts, sorting pre-sorted lists is faster, even without special logic. Looking for sorted inputs before merging pays off even on short lists, when the input is nearly sorted: the effect of looking for presorted inputs even on sublists of length eight is consistently more negative (i.e. reduces runtimes) than for the other cutoffs. The difference is statistically significant at nearly all shuffling levels, and never significantly positive.

Finally, the three algorithmic tweaks. Interestingly, the coefficients tell us that, overall, the additional overhead of the branch-free merge loop slows the sort down by 5%. The fastest combination seems to be larger base cases, with or without conditional moves (C or T), cached calls to key (T), and a branch-ful merge loop (T); the differences are statistically significant against nearly all other combinations, except FxTxT (no leaf sort, cached key, and branch-ful merge loop). Compared with the current code (FxFxT), the speed-up is on the order of 5%, and at least 2% with 99.99% confidence.

If I want to improve the performance of generic sorts, it looks like I want to test for pre-sorted inputs when merging into a list of length 8 or more, probably implement larger base cases, cache calls to the key function in merge, and keep the branch-ful merge loop.

Optimising the runtime of specialised sorts

I kept the exact same plan as the previous section on generic sorts. The only difference is that independent latin rectangles were re-generated from scratch. With the overhead from generic, indirect calls removed, I'm hoping to see larger effects from the micro-optimisations.

two-way ANOVA for specialised sort runtime
> anova(lm(log(Cycles, 2) ~ Size.Scatter*Shuffle.Quick
                          + Size.Scatter*Leaf.Cache.BranchMerge
                          + Shuffle.Quick*Leaf.Cache.BranchMerge
                          + 0,
           data))
Analysis of Variance Table

Response: log(Cycles, 2)
                                        Df   Sum Sq Mean Sq  F value Pr(>F)    
Size.Scatter                            40 12531049  313276 2.92e+07 <2e-16 ***
Shuffle.Quick                           39    12324     316 2.94e+04 <2e-16 ***
Leaf.Cache.BranchMerge                  11     3365     306 2.85e+04 <2e-16 ***
Size.Scatter:Shuffle.Quick            1475     2952       2 1.86e+02 <2e-16 ***
Size.Scatter:Leaf.Cache.BranchMerge    429      391       1 8.49e+01 <2e-16 ***
Shuffle.Quick:Leaf.Cache.BranchMerge   429      150       0 3.25e+01 <2e-16 ***
Residuals                            21577      232       0                    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Here as well, all the main and crossed effects are statistically significant; it's also even clearer that the crossed effects are much less important than the main ones, and that it's probably not too bad to ignore the former.

one-way coefficients for specialised sort runtime
> summary(lm(log(Cycles, 2) ~ Size.Scatter + Shuffle.Quick
                            + Leaf.Cache.BranchMerge + 0, data))

Call:
lm(formula = log(Cycles, 2) ~ Size.Scatter + Shuffle.Quick + 
    Leaf.Cache.BranchMerge + 0, data = data)

Residuals:
    Min      1Q  Median      3Q     Max 
-1.5650 -0.2395 -0.0186  0.2233  2.1683 

Coefficients:
                               Estimate Std. Error t value Pr(>|t|)    
Size.Scatter32x10              12.60101    0.02431  518.30  < 2e-16 ***
Size.Scatter32x500             12.65034    0.02414  523.98  < 2e-16 ***
Size.Scatter32xF               12.57545    0.02431  517.19  < 2e-16 ***
Size.Scatter32xT               12.64220    0.02423  521.78  < 2e-16 ***
Size.Scatter138x10             14.76927    0.02422  609.70  < 2e-16 ***
Size.Scatter138x500            14.81878    0.02433  609.04  < 2e-16 ***
Size.Scatter138xF              14.76646    0.02440  605.16  < 2e-16 ***
Size.Scatter138xT              14.86926    0.02441  609.04  < 2e-16 ***
[...]
Shuffle.QuickFx8               -2.13508    0.02287  -93.37  < 2e-16 ***
Shuffle.QuickFx16              -1.87832    0.02285  -82.19  < 2e-16 ***
Shuffle.QuickFx32              -1.71007    0.02284  -74.86  < 2e-16 ***
Shuffle.QuickFxF               -0.64569    0.02286  -28.24  < 2e-16 ***
Shuffle.Quick1x8               -1.60291    0.02286  -70.12  < 2e-16 ***
Shuffle.Quick1x16              -1.48993    0.02284  -65.22  < 2e-16 ***
Shuffle.Quick1x32              -1.39072    0.02282  -60.95  < 2e-16 ***
Shuffle.Quick1xF               -0.55361    0.02284  -24.24  < 2e-16 ***
[...]
Leaf.Cache.BranchMergeCMOVxFxT -0.64106    0.01248  -51.37  < 2e-16 ***
Leaf.Cache.BranchMergeCMOVxTxF  0.02932    0.01248    2.35   0.0188 *  
Leaf.Cache.BranchMergeCMOVxTxT -0.65298    0.01248  -52.32  < 2e-16 ***
Leaf.Cache.BranchMergeFxFxF     0.30856    0.01248   24.72  < 2e-16 ***
Leaf.Cache.BranchMergeFxFxT    -0.43997    0.01248  -35.25  < 2e-16 ***
Leaf.Cache.BranchMergeFxTxF     0.29306    0.01248   23.48  < 2e-16 ***
Leaf.Cache.BranchMergeFxTxT    -0.44372    0.01248  -35.55  < 2e-16 ***
Leaf.Cache.BranchMergeTxFxF     0.01703    0.01248    1.36   0.1725    
Leaf.Cache.BranchMergeTxFxT    -0.70801    0.01248  -56.73  < 2e-16 ***
Leaf.Cache.BranchMergeTxTxF     0.02323    0.01248    1.86   0.0627 .  
Leaf.Cache.BranchMergeTxTxT    -0.68746    0.01248  -55.08  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.395 on 23910 degrees of freedom
Multiple R-squared:    1, Adjusted R-squared:    1 
F-statistic: 8.95e+05 on 90 and 23910 DF,  p-value: <2e-16

The general shape of the coefficients is pretty much the same as for generic sorts, except that the differences are amplified now that the overhead of indirect calls is eliminated.

The coefficients for crossed list length and scattering level are plotted below. The graph shows that fully scattering the conses of long lists around memory slows the sort down by a factor of 8; the initial check for crossed effects gave good reasons to believe that the effect is fairly homogeneous throughout all implementations.

Checking for sorted inputs before merging still helps, even on short lists (of length 8 or more). In fact, even on completely shuffled lists, looking for quick merges on short lists very probably accelerates the sort compared to not looking for presorted inputs at all, while the speed-up compared to the other quick-merge cutoffs isn't significant at a 99.99% confidence level.

The key function is the identity, and is inlined away to nothing in these measurements. It's not surprising that the difference between cached and uncached key values is tiny. The versions with branchful leaf sorts, cached keys and a branchful merge loop are quicker than the others at a 99.99% confidence level, except compared to the same without cached keys, or with conditional moves in the leaf sorts (and that slowdown doesn't seem statistically significant).

When the sort is specialised, I probably want to use a merge function that checks for pre-sorted inputs very early, to implement larger base cases (with conditional moves or branches), and to keep the merge loop branchful.

Putting it all together

Comparison counts are minimised by avoiding sorting networks, and by enabling opportunistic constant-time merges as early as possible. Generic sorts are fastest with larger base cases (with or without branches), cached calls to the key function, a branch-ful merge loop and early checks for constant-time merges. Specialised sorts are, similarly, fastest with larger base cases, a branch-ful merge loop and early checks when merging (without positive or negative effect from caching calls to the key function, even if it's the identity).

Overall, these results point me toward one implementation: branch-ful size-2 and size-3 base cases that let me avoid redundant comparisons, cached calls to the key function, a branch-ful merge loop, and checks for constant-time merges when the result is of length eight or more.

The compound effect of these choices is linear-time complexity on sorted inputs, speed-ups (and reductions in comparison counts) by factors of 2 to 4 on nearly-sorted inputs, and by 5% to 30% on fully shuffled lists.

The resulting code follows.

new merge function
(defun merge-lists* (head list1 list2 test key &aux (tail head))
  (declare (type cons head list1 list2)
           (type function test key)
           (optimize speed))
  (let ((key1 (funcall key (car list1)))
        (key2 (funcall key (car list2))))
    (macrolet ((merge-one (l1 k1 l2)
                 `(progn
                    (setf (cdr tail) ,l1
                          tail       ,l1)
                    (let ((rest (cdr ,l1)))
                      (cond (rest
                             (setf ,l1 rest
                                   ,k1 (funcall key (first rest))))
                            (t
                             (setf (cdr ,l1) ,l2)
                             (return (cdr head))))))))
      (loop
       (if (funcall test key2           ; this way, equivalent
                         key1)          ; values are first popped
           (merge-one list2 key2 list1) ; from list1
           (merge-one list1 key1 list2))))))
size-specialised sort functions
(defun stable-sort-list-2 (list test key)
  (declare (type cons list)
           (type function test key))
  (let ((second (cdr list)))
    (declare (type cons second))
    (when (funcall test (funcall key (car second))
                   (funcall key (car list)))
      (rotatef (car list) (car second)))
    (values list second (shiftf (cdr second) nil))))

(defun stable-sort-list-3 (list test key)
  (declare (type cons list)
           (type function test key))
  (let* ((second (cdr list))
         (third  (cdr second))
         (x (car list))
         (y (car second))
         (z (car third)))
    (declare (type cons second third))
    (when (funcall test (funcall key y)
                   (funcall key x))
      (rotatef x y))
    (let ((key-z (funcall key z)))
      (when (funcall test key-z
                     (funcall key y))
        (if (funcall test key-z
                     (funcall key x))
            (rotatef x z y)
            (rotatef z y))))
    (setf (car list)   x
          (car second) y
          (car third)  z)
    (values list third (shiftf (cdr third) nil))))
new sort function
(defconstant +fast-merge-limit+ 8)

(defun stable-sort-list (list test key &aux (head (cons :head list)))
  (declare (type list list)
           (type function test key)
           (dynamic-extent head))
  (labels ((merge* (size list1 tail1 list2 tail2 rest)
             (when (>= size +fast-merge-limit+)
               (cond ((not (funcall test (funcall key (car list2))   ; stability
                                         (funcall key (car tail1)))) ; trickery
                      (setf (cdr tail1) list2)
                      (return-from merge* (values list1 tail2 rest)))
                     ((funcall test (funcall key (car tail2))
                                    (funcall key (car list1)))
                      (setf (cdr tail2) list1)
                      (return-from merge* (values list2 tail1 rest)))))
             (values (merge-lists* head list1 list2 test key)
                     (if (null (cdr tail1))
                         tail1
                         tail2)
                     rest))
           (recur (list size)
             (declare (optimize speed)
                      (type cons list)
                      (type (and fixnum unsigned-byte) size))
             (cond ((> size 3)
                    (let ((half (ash size -1)))
                      (multiple-value-bind (list1 tail1 rest)
                          (recur list half)
                        (multiple-value-bind (list2 tail2 rest)
                            (recur rest (- size half))
                          (merge* size list1 tail1 list2 tail2 rest)))))
                   ((= size 3)
                    (stable-sort-list-3 list))
                   ((= size 2)
                    (stable-sort-list-2 list))
                   (t ; (= size 1)
                    (values list list (shiftf (cdr list) nil))))))
    (when list
      (values (recur list (length list))))))

It's somewhat longer than the original, but not much more complicated: the extra code comes mostly from the tedious but simple leaf sorts. Particularly satisfying is the absence of conditional move hacks: SBCL only recognizes trivial forms, and only on X86 or X86-64, so these hacks tend to be ugly and sometimes sacrifice performance on other platforms. SBCL's bad support for conditional moves may explain the lack of any speed up from converting branches to select expressions: the conditional swaps had to be implemented as pairs of independent tests (producing T/NIL) and conditional moves. Worse, when the comparison is inlined, an additional conditional move converted the result of the integer comparison to a boolean value; in total, three pairs of comparison/conditional move were then executed instead of one comparison and two conditional moves. Previous work [PDF] on out-of-place merge sort of arrays in C found it useful to switch to a conditional move-based merge loop and sorting networks. Some of the difference is probably caused by SBCL's weaker code generation, but the additional overhead inherent to linked list manipulations (compared to linear accesses in arrays) may also play a part.

Another code generation issue is caused by the way the initial version called the comparison function in exactly one place. This meant that arbitrary comparators would almost always be inlined in the specialised sort's single call site. We lose that property with the accelerated merge and larger base cases. It doesn't seem too much of an issue: functions can be declared inline explicitly, and the key function was already called from multiple sites.

I'm a bit surprised that neither of the comparison-dependent branches was appreciably sped up by converting it to conditional moves. I'm a lot more surprised by the fact that it pays off to try and detect pre-sorted lists even on tiny merges, and even when the comparator is inlined. The statistical tests were useful here, with results that defy my initial expectations and let me keep the code simpler. I would be pleasantly surprised if complex performance improvement patches, in SBCL and otherwise, went through similar testing. Code is a long-term liability, and we ought to be convinced the additional complexity is worth the trouble.

Independently of that, the latin square design was very helpful: it easily saved me a couple CPU-weeks, and I can see myself using it regularly in the future. The approach only works if we have a rough (and simple) performance model, but I have a hard time interpreting complex models with hundreds of interacting parameters anyway. Between a simplistic, but still useful, model and a complex one with a much stronger fit, I'll usually choose the former… as long as I can be fairly certain the simple model isn't showing me a mirage.
