BFFT Techblog May: Divide and Conquer NURBS into Polylines
Topic: How to transform Non-Uniform Rational B-Splines (NURBS) into Polylines consisting of nodes and edges using an efficient Divide and Conquer approach.
Recently I worked with a large-scale, high-precision road network that was represented entirely by millions of 3D NURBS geometries instead of the more commonly used polylines. Now, polylines are easy to draw, and there are a number of powerful tools out there for storing and spatially analysing polylines, as well as tools for creating road maps. However, my preferred spatial data tools (like most other spatial data tools) did not support NURBS. This led me to the task of converting those millions of NURBS into polylines first, before I could apply my usual toolchain, including storage of the network in a PostGIS database. Figure 1 illustrates the difference between a NURBS and a polyline.
I had three objectives:
- Implementation in Java
- Library or efficient algorithm for the evaluation of the given NURBS geometries
- Most importantly, no fixed number of points on the resulting polylines. For drawing and retrieval speed purposes I needed polylines which contained more points at segments with higher curvature (steep curves) and fewer points at segments with lower curvature (rather straight lines).
Below I want to give an overview of how I achieved my three objectives.
Figure 1: a) NURBS with control points (green) b) Polyline with nodes (orange)
First of all, our entire framework is based on Java. We work with Big Data in the automotive environment and support the whole chain from collection to storage, analysis and presentation of the data. This makes Java a natural choice. Secondly, there are great Java libraries for working with spatial data, like GeoTools and JTS, which we use regularly. The downside of Java was that, except for some code here and there, I could not find any native NURBS library for Java. The closest is the C library TinySpline (https://github.com/msteinbeck/tinyspline), which has bindings for Java but did not suit my purposes, mainly because it would not have been easy to integrate into our native Java platform and it is based on de Boor's algorithm for the calculation of the basis functions (I will go into that later). So finally, I decided to implement the algorithms myself in native Java, which required the choice of an algorithm. This brings me to my next objective: to choose an efficient algorithm.
2 Efficient algorithm
Of course, an efficient algorithm is a common objective. In my case, roughly 10 million NURBS geometries had to be converted to polylines, and not only once, since there would be regular updates to the road network. So even minor performance improvements in the algorithms would significantly speed up the process.
Unfortunately this is not the right place to get into the math and amazing properties of Non-Uniform Rational B-Splines. For the interested, however, Philip J. Schneider wrote an insightful introduction to the topic back in 1996 in the course of working on QuickDraw 3D. For everything else you ever wanted to know about NURBS I can recommend The NURBS Book by Piegl and Tiller, who also supply pseudo code for some of their proposed algorithms. Nevertheless, let me state at least the basic facts: a NURBS geometry is a parametric function which can represent any surface structure and still preserve mathematical correctness. A NURBS Q(t) is defined by the following equation:

Q(t) = ( Σ i=1..n Ni,k(t) wi Bi ) / ( Σ i=1..n Ni,k(t) wi )    (1)

with the basis functions defined recursively over the knot vector as

Ni,1(t) = 1 if xi ≤ t < xi+1, otherwise 0
Ni,k(t) = ((t − xi) / (xi+k−1 − xi)) · Ni,k−1(t) + ((xi+k − t) / (xi+k − xi+1)) · Ni+1,k−1(t)
So a NURBS is mainly characterized by a set of n control points Bi. To determine the global and local influence of those control points we have a set of n basis functions Ni,k(t) and n weights wi respectively. The basis functions themselves are calculated based on a so-called knot vector, which is a non-decreasing list of real numbers xi. k is the order of the basis function and is defined as k = degree + 1.
Inspecting the formula, the majority of the processing time obviously goes into the evaluation of the basis functions for any given parametric value t. The most apparent implementation would be to code up the recursion given in equation 1. This recursion is called the Cox-de Boor recursion (henceforth CBR) after the two bright scientists who independently discovered the equation back in the seventies. However, more efficient ways to calculate the basis functions have been proposed, since in the CBR many basis functions are evaluated over and over again, and there are also cases which cause a division by 0. Piegl and Tiller describe an approach to compute the functions from lower to higher order, in contrast to the Cox-de Boor recursion's higher-to-lower-order scheme, which avoids duplicated calculations. Jankauskas ran performance tests on both (Cox-de Boor, Piegl & Tiller) and a couple of his own algorithms, and found that Piegl and Tiller's algorithm (henceforth PGA) yields a significant increase in computation speed: PGA's computing time rises linearly with the order of the basis functions, compared to CBR's exponential rise. Already the evaluation of a set of 2nd-order basis functions is 11 times faster using PGA. Another great performance effect is that PGA only calculates those basis functions which actually apply for a given parametric value t (small reminder: a NURBS can be influenced by at most k basis functions at any given value t, k being the order). That made my own implementation of the evaluation of Q(t) (see equation 1) for a given value t easier, because I did not need to take care of this issue any more.
Finally, I used PGA to evaluate the basis functions and could shortcut the calculation of Q(t) a bit, since some basis functions did not need to be considered depending on t.
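To give you an idea of what such an evaluation looks like, here is a minimal Java sketch in the style of The NURBS Book (knot span search plus lower-to-higher-order basis function computation). The class and method names are my own, not from our production code:

```java
public final class NurbsEval {

    /** Binary search for the knot span index such that U[span] <= t < U[span+1].
     *  n is the highest control point index (number of control points - 1),
     *  p is the degree (order k - 1), U is a clamped knot vector. */
    static int findSpan(int n, int p, double t, double[] U) {
        if (t >= U[n + 1]) return n; // special case: t at the end of the range
        int low = p, high = n + 1, mid = (low + high) / 2;
        while (t < U[mid] || t >= U[mid + 1]) {
            if (t < U[mid]) high = mid; else low = mid;
            mid = (low + high) / 2;
        }
        return mid;
    }

    /** Computes only the p+1 non-vanishing basis functions at t,
     *  from lower to higher order, without redundant evaluations. */
    static double[] basisFuns(int span, double t, int p, double[] U) {
        double[] N = new double[p + 1];
        double[] left = new double[p + 1];
        double[] right = new double[p + 1];
        N[0] = 1.0;
        for (int j = 1; j <= p; j++) {
            left[j] = t - U[span + 1 - j];
            right[j] = U[span + j] - t;
            double saved = 0.0;
            for (int r = 0; r < j; r++) {
                double temp = N[r] / (right[r + 1] + left[j - r]);
                N[r] = saved + right[r + 1] * temp;
                saved = left[j - r] * temp;
            }
            N[j] = saved;
        }
        return N;
    }

    /** Evaluates Q(t): only the k = p+1 non-vanishing basis functions enter the sums. */
    static double[] evalNurbs(double t, int p, double[] U, double[][] ctrl, double[] w) {
        int n = ctrl.length - 1;
        int span = findSpan(n, p, t, U);
        double[] N = basisFuns(span, t, p, U);
        double[] num = new double[ctrl[0].length];
        double denom = 0.0;
        for (int j = 0; j <= p; j++) {
            int i = span - p + j; // global index of the affected control point
            double c = N[j] * w[i];
            denom += c;
            for (int d = 0; d < num.length; d++) num[d] += c * ctrl[i][d];
        }
        for (int d = 0; d < num.length; d++) num[d] /= denom;
        return num;
    }
}
```

A handy sanity check: with all weights equal to 1 and the clamped knot vector [0, 0, 0, 1, 1, 1], the curve reduces to a quadratic Bézier curve, so the result at t = 0.5 can be verified by hand.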
3 No set number of points on the resulting polylines
My last and most important objective was to generate polylines with a flexible number of nodes. Highways are rather straight and should need fewer nodes; European inner-city streets are not and need more. As stated above, I needed polylines which contained more points at segments with higher curvature (steep curves) and fewer points at segments with lower curvature (rather straight lines) for drawing and retrieval speed purposes. Figure 2 gives you an idea of what I mean. Both polylines have 15 nodes; however, in graph a) the nodes are distributed honouring curvature and the graph appears smoother, especially at the extrema.
Figure 2: Polyline with 15 nodes a) Nodes distributed to honour curvature b) Nodes distributed equally
Now the standard approach to draw a parametric function is to stepwise increase the parametric value t in its range of definition, so for t ∈ [a,b]: t1 = a, t2 = a + ε, t3 = a + 2ε, …, tn = b, thus getting n nodes (one per step) in the resulting polyline. A useful characteristic of NURBS is that this stepwise approximation naturally plots more points at segments with higher curvature. This is due to the gradient of the basis functions (higher gradient → more distance between nodes for equidistant steps). However, here we are always limited to a fixed number of points (the number of steps). The trouble is that it is not trivial to choose how many points a resulting polyline should contain, since we only know its very coarse shape (derived from the NURBS control points) and do not know its arc length. Even a sophisticated algorithm to determine the number of necessary points would not give you the stability and control over the result you might wish for. So I disregarded the stepwise process!
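For comparison, the rejected fixed-step approach is trivial to write down. In this sketch, curve stands in for any parametric evaluator (my own generic signature, not a specific library API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.DoubleFunction;

public final class UniformSampler {
    /** Samples curve at n+1 evenly spaced parameter values in [a, b].
     *  The node count is fixed up front, regardless of the curve's shape. */
    static List<double[]> sample(DoubleFunction<double[]> curve, double a, double b, int n) {
        List<double[]> pts = new ArrayList<>();
        for (int i = 0; i <= n; i++) {
            double t = a + (b - a) * i / n;
            pts.add(curve.apply(t));
        }
        return pts;
    }
}
```

Note that n has to be guessed before the shape of the curve is known, which is exactly the weakness described above.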
But now how to evaluate points at locations of higher curvature without having to stepwise evaluate the entire NURBS first? The ideal way would be, to evaluate only exactly those values t of a NURBS, which are necessary to give it a smooth appearance in the resulting polyline. Thus we would avoid unnecessary processing steps. And that’s actually exactly what I did, using a kind of divide and conquer approach to approximate the NURBS.
Divide and Conquer
How can one achieve a smooth curve representation? A good way is to not allow two consecutive segments to connect at an angle sharper than a certain threshold. This way, I would naturally get the node distribution stated in my objective. By starting to approximate the curve with a minimum number of segments and then recursively dividing each segment into smaller segments until two consecutive segments fall below the connection angle threshold, a small number of approximation steps results in a smooth curve representation. I am aware that this does not yield the optimal solution, but it is quite fast considering that a NURBS evaluation for every single point is quite expensive. Figure 3 shows an illustration of my divide and conquer approach for a NURBS with five control points. The initial step of the recursion starts out with a coarse polyline. Its nodes are located roughly at the extremes of the curve, at the turning points, and at the start and end of the curve. That is a prerequisite in order to honour all extremes of the curve and not accidentally skip one of them during the recursion.
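The pseudo code below relies on an angle check. A possible Java helper (my own naming, matching the pseudo code's angleBetween(mid, a, b): the angle at vertex a between the rays towards mid and towards b) could look like this:

```java
public final class Angles {
    /** Angle in radians at vertex a between the rays a->mid and a->b.
     *  If mid lies on the segment a-b, the angle is 0 (perfectly flat). */
    static double angleBetween(double[] mid, double[] a, double[] b) {
        double ux = mid[0] - a[0], uy = mid[1] - a[1];
        double vx = b[0] - a[0], vy = b[1] - a[1];
        double cos = (ux * vx + uy * vy) / (Math.hypot(ux, uy) * Math.hypot(vx, vy));
        return Math.acos(Math.max(-1.0, Math.min(1.0, cos))); // clamp against rounding
    }
}
```

A small result means the evaluated midpoint barely deviates from the straight segment, so the segment is smooth enough and can be marked as done.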
Algorithm 1: One recursion step of the divide and conquer refinement
allDone ← true
for (i ← 1; i < number of points in polyline; i++) do
    if (point(i) is marked as done) then
        skip this iteration step
    allDone ← false
    tMid ← ½ (tValues(i) + tValues(i + 1))
    midPoint ← evalNURBS(tMid)
    if (angleBetween(midPoint, point(i), point(i + 1)) ≤ threshold) then
        mark point(i) as done
    if (angleBetween(midPoint, point(i + 1), point(i)) ≤ threshold) then
        mark point(i + 1) as done
    insert midPoint into polyline between point(i) and point(i + 1)
    insert tMid into tValues accordingly
if (allDone = true or recursionDepth > max) then
    return polyline
else
    recurse with the refined polyline and tValues
So given those nodes, or Greville abscissae (the parameter values ξi = (xi+1 + … + xi+k−1)/(k − 1) at which control point Bi has its maximum influence), we can draw out the coarse shape of the polyline, which resembles a zigzag line through the extremes. Halfway between the extremes (meaning halfway between the t values which represent the extremes) sit the turning points, which I included in my initial curve as well (see figure 3 a)). Next we divide and conquer the initial curve into a smooth representation of our NURBS curve with exactly number-of-polyline-nodes evaluation steps (no point is evaluated that is not inserted into the resulting polyline). For the detailed steps see my pseudo code explaining the recursion (algorithm 1). The initial input is our zigzag polyline and the corresponding parametric t values.
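A recursive Java sketch of this refinement could look as follows. It is simplified compared to algorithm 1 (each segment recurses independently instead of using done flags and in-place inserts), evalNURBS is stood in for by a generic evaluator, and all names are my own; I test it here against a simple parabola rather than a real NURBS:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.DoubleFunction;

public final class DivideAndConquer {

    /** Refines the polyline between (ta, pa) and (tb, pb) until the deviation
     *  angle at both endpoints drops below angleTol (radians), half the segment
     *  length falls below minDist, or maxDepth is reached. Every evaluated
     *  point ends up in the output; pa was already appended by the caller. */
    static void refine(DoubleFunction<double[]> curve,
                       double ta, double[] pa, double tb, double[] pb,
                       double angleTol, double minDist, int depth, int maxDepth,
                       List<double[]> out) {
        double tMid = 0.5 * (ta + tb);
        double[] mid = curve.apply(tMid);
        boolean flat = angleBetween(mid, pa, pb) <= angleTol
                    && angleBetween(mid, pb, pa) <= angleTol;
        boolean tooShort = 0.5 * Math.hypot(pb[0] - pa[0], pb[1] - pa[1]) < minDist;
        if (flat || tooShort || depth >= maxDepth) {
            out.add(mid); // keep the already evaluated midpoint
            out.add(pb);
            return;
        }
        refine(curve, ta, pa, tMid, mid, angleTol, minDist, depth + 1, maxDepth, out);
        refine(curve, tMid, mid, tb, pb, angleTol, minDist, depth + 1, maxDepth, out);
    }

    static List<double[]> toPolyline(DoubleFunction<double[]> curve, double t0, double t1,
                                     double angleTol, double minDist, int maxDepth) {
        List<double[]> out = new ArrayList<>();
        double[] p0 = curve.apply(t0), p1 = curve.apply(t1);
        out.add(p0);
        refine(curve, t0, p0, t1, p1, angleTol, minDist, 0, maxDepth, out);
        return out;
    }

    /** Angle at vertex a between the rays a->mid and a->b. */
    static double angleBetween(double[] mid, double[] a, double[] b) {
        double ux = mid[0] - a[0], uy = mid[1] - a[1];
        double vx = b[0] - a[0], vy = b[1] - a[1];
        double cos = (ux * vx + uy * vy) / (Math.hypot(ux, uy) * Math.hypot(vx, vy));
        return Math.acos(Math.max(-1.0, Math.min(1.0, cos)));
    }
}
```

In the real conversion, toPolyline would be called once per segment of the initial zigzag polyline (between consecutive extremes and turning points), so no extreme is skipped.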
One more remark: this approach is only guaranteed to terminate for NURBS with first-derivative continuity. If there is a discontinuity (kinks, a sudden change of direction like a right angle), the recursion would never stop. So even though my data did not contain discontinuous curves, I built in a safeguard by defining a maximum recursion depth. I am sure there would be a better approach, but that was not in my scope. Furthermore, it might not be necessary to account for curvature below a certain distance between two consecutive nodes on the polyline. So I introduced a second tolerance parameter (next to the angle between two successive line segments) which defines a minimum distance between two consecutive nodes. I simply measure the distance between two sequential nodes before the next recursion step; if half this distance is below the threshold, I do not go deeper, even if the angle is still above the angle threshold.
I implemented Piegl and Tiller's NURBS evaluation algorithm in Java to evaluate roughly 10 million NURBS of a large-scale road network in order to create polylines. Most notably, I used a divide and conquer algorithm to approximate the NURBS geometries in a way that honours segments with higher curvature more than segments with lower curvature, thus achieving a smooth curve with a minimum number of points in a minimum number of processing steps. In fact, every point that is evaluated on the NURBS curve finds its way into the final polyline; no additional calculations are needed. The resulting road network of polylines can be drawn on a map on the fly, thanks to its minimum number of polyline nodes, even though it contains a vast amount of detail.
Figure 4: Example of the conversion result at highway A9 Munich (roads based on Here HD maps, base image from Bing Aerial)
Author: Jörg M. (IT Development, Concepts & Tooling)
Image sources: Jörg M.