Profiling Distance Functions for Geodesic Search
Computing distances on a road network is not as straightforward as on a map. The Earth is an ellipsoid, so distances should be computed on a sphere rather than on a plane. Accounting for all the ways the Earth is neither a plane nor a sphere can be computationally expensive. I recently implemented a choice few geodesic distance functions in warthog, a pathfinding library.
Searching on an (irregular) sphere
In pathfinding, one often needs distance heuristics, that is, functions that can (quickly) estimate distances. Classic examples include the Euclidean distance, for searching on a plane, or the Manhattan distance, for searching on grids. However, these functions are not accurate when it comes to estimating distances in road networks.
The Earth is an ellipsoid,1 an object resembling a slightly flattened sphere. For example, it is not technically correct to refer to “the radius of the Earth;” it depends where you measure.2 Thus, measuring with the Euclidean distance quickly accumulates error.
When computing distances, it is easier to work in a geodetic system, that is, to use spherical coordinates. In this post, I will look at three distance functions, with different degrees of accuracy:
- The spherical projected distance.
- The great-circle distance.
- The Vincenty distance.
Different distance functions
There is more than one way to compute distances on the globe. The main question boils down to: how accurate do you want it to be? Obviously, the more accurate the function, the more expensive it is.
The spherical projected (equirectangular) distance is:

$$d = R \sqrt{\Delta\varphi^2 + \left(\cos\varphi_m \, \Delta\lambda\right)^2}$$

where:

- $\Delta\varphi$ is the difference between the latitudes, in radians; its sign does not matter.
- $\Delta\lambda$ is the difference between the longitudes.
- $\varphi_m = \frac{\varphi_1 + \varphi_2}{2}$ is the mean latitude.
- $R$ is the radius of the Earth.
The idea is to project the sphere (the Earth) onto a plane. This function takes into account that the distance covered by a difference in longitude shrinks as latitude increases.
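As a concrete sketch (my own minimal version, not warthog's actual implementation; the constant value and the function name are assumptions, and coordinates are taken in radians):

```cpp
#include <cmath>

// Mean Earth radius in metres (an assumption; warthog may use another value).
constexpr double EARTH_RADIUS = 6371009.0;

// Equirectangular ("spherical projected") distance.
// Scales the longitude difference by the cosine of the mean latitude,
// then applies a plain Euclidean distance on the projected plane.
double spherical_distance(double lat1, double lon1,
                          double lat2, double lon2)
{
    double dphi = lat2 - lat1;
    double dlambda = lon2 - lon1;
    double phi_m = 0.5 * (lat1 + lat2);
    double x = std::cos(phi_m) * dlambda;
    return EARTH_RADIUS * std::sqrt(dphi * dphi + x * x);
}
```

One degree of longitude along the equator, or one degree of latitude along a meridian, both come out to roughly 111.2 km under this radius.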
The shortest path on a plane is a straight line; on a sphere, it is a great-circle arc. An example of a great-circle on the globe is the equator. A great-circle divides a sphere into two (equal) hemispheres. Finding the distance between two points on a sphere thus amounts to computing the arc between these points along the great-circle joining them.3 The distance is thus given by the spherical law of cosines:

$$d = R \arccos\left(\sin\varphi_1 \sin\varphi_2 + \cos\varphi_1 \cos\varphi_2 \cos\Delta\lambda\right)$$
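A minimal sketch of the law-of-cosines computation (my own code, not warthog's; coordinates in radians and the radius constant is an assumption):

```cpp
#include <cmath>

constexpr double EARTH_RADIUS = 6371009.0; // mean radius in metres (assumption)

// Great-circle distance via the spherical law of cosines.
double great_circle_distance(double lat1, double lon1,
                             double lat2, double lon2)
{
    double c = std::sin(lat1) * std::sin(lat2)
             + std::cos(lat1) * std::cos(lat2) * std::cos(lon2 - lon1);
    // Clamp to [-1, 1] in case rounding pushes the value out of acos's domain.
    if (c > 1.0) c = 1.0;
    if (c < -1.0) c = -1.0;
    return EARTH_RADIUS * std::acos(c);
}
```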
For small distances and limited float precision, the great-circle function can have large rounding errors.4 In this case, we can use the haversine (half of a versine) formula:

$$\operatorname{hav}(\theta) = \sin^2\left(\frac{\theta}{2}\right) = \frac{1 - \cos\theta}{2}$$
Historically, this function was the de facto standard due to the availability of pre-computed tables, making it efficient to use by hand. Computing a distance on a sphere is then:

$$d = 2R \arcsin\left(\sqrt{\operatorname{hav}(\Delta\varphi) + \cos\varphi_1 \cos\varphi_2 \operatorname{hav}(\Delta\lambda)}\right)$$
The second form uses the cosine form of the haversine to avoid the expensive square operation.
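To make the two variants concrete, here is a self-contained sketch of both (again my own code, not warthog's; coordinates in radians, radius constant assumed):

```cpp
#include <cmath>

constexpr double EARTH_RADIUS = 6371009.0; // mean radius in metres (assumption)

// Haversine distance using hav(x) = sin^2(x / 2).
double haversine_distance(double lat1, double lon1,
                          double lat2, double lon2)
{
    double sp = std::sin((lat2 - lat1) / 2.0);
    double sl = std::sin((lon2 - lon1) / 2.0);
    double a = sp * sp + std::cos(lat1) * std::cos(lat2) * sl * sl;
    return 2.0 * EARTH_RADIUS * std::asin(std::sqrt(a));
}

// Same distance using hav(x) = (1 - cos(x)) / 2: no explicit squaring.
double haversine_distance_cos(double lat1, double lon1,
                              double lat2, double lon2)
{
    double a = (1.0 - std::cos(lat2 - lat1)) / 2.0
             + std::cos(lat1) * std::cos(lat2)
             * (1.0 - std::cos(lon2 - lon1)) / 2.0;
    return 2.0 * EARTH_RADIUS * std::asin(std::sqrt(a));
}
```

The two forms are mathematically identical, so they should agree to well under a millimetre on double-precision inputs.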
Finally, if you want precision down to 0.6 mm, you can call on the Vincenty formula, a procedure that computes distances on an ellipsoid. But, first, there is a special case when the ellipsoid is a sphere which we can use:

$$d = R \operatorname{atan2}\left(\sqrt{\left(\cos\varphi_2 \sin\Delta\lambda\right)^2 + \left(\cos\varphi_1 \sin\varphi_2 - \sin\varphi_1 \cos\varphi_2 \cos\Delta\lambda\right)^2},\; \sin\varphi_1 \sin\varphi_2 + \cos\varphi_1 \cos\varphi_2 \cos\Delta\lambda\right)$$
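A sketch of this spherical special case (my own illustrative code, not warthog's; coordinates in radians, radius assumed). The atan2 form is what makes it stable for both very small and near-antipodal separations:

```cpp
#include <cmath>

constexpr double EARTH_RADIUS = 6371009.0; // mean radius in metres (assumption)

// Vincenty's formula specialised to a sphere: the central angle is
// recovered with atan2 from the sine (numerator) and cosine (denominator)
// of the angular separation.
double vincenty_sphere_distance(double lat1, double lon1,
                                double lat2, double lon2)
{
    double dl = lon2 - lon1;
    double a = std::cos(lat2) * std::sin(dl);
    double b = std::cos(lat1) * std::sin(lat2)
             - std::sin(lat1) * std::cos(lat2) * std::cos(dl);
    double num = std::sqrt(a * a + b * b);
    double den = std::sin(lat1) * std::sin(lat2)
               + std::cos(lat1) * std::cos(lat2) * std::cos(dl);
    return EARTH_RADIUS * std::atan2(num, den);
}
```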
It’s already a more involved formula. I won’t go into the details of the actual Vincenty formula; the most important thing is that it is an iterative procedure. It means that, as opposed to everything before, it’s not a single closed-form computation. The (inverse) Vincenty formula finds the azimuths and ellipsoidal distance between two points. It refines its approximation until the change falls below a fixed tolerance – except in a few near-antipodal cases that cannot converge.
We will start by validating the code: there is no point in measuring performance if the results are bogus. I took the test case from movable-type.5 We will use two points in Australia: Flinders Peak and Buninyong.
First, let us convert the degree-minute-second notation to decimal degrees. The formula to convert is:

$$\text{decimal} = \text{degrees} + \frac{\text{minutes}}{60} + \frac{\text{seconds}}{3600}$$
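A small helper sketch for the conversion (a hypothetical function of my own, not part of warthog; the sign of the degrees carries the hemisphere, negative for south or west):

```cpp
#include <cmath>

// Convert degree-minute-second notation to decimal degrees.
// The sign of `deg` determines the sign of the result.
double dms_to_decimal(double deg, double min, double sec)
{
    double sign = std::signbit(deg) ? -1.0 : 1.0;
    return sign * (std::fabs(deg) + min / 60.0 + sec / 3600.0);
}
```

For example, 37°30′36″ converts to 37.51 decimal degrees.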
Now we can look at the results of calling these functions:
| Function | Result (km) | Difference (%) |
|----------|-------------|----------------|
I think one interesting result from this is that the geodesic functions underestimate the distance while the Euclidean distance overestimates it. The other is that, between these two points, all functions seem to have the same error – with the exception of the exact Vincenty procedure.
Using a database
I ran some tests by collecting known cases online. This is more of a sanity check than a set of rigorous tests.
The main result is that the approximate functions all seem to return the exact same values. The iterative Vincenty procedure is more accurate on more of the points.
Now, one thing I like a bit too much is looking at assembly. I tried to engineer the code so that the distance functions themselves are not too tightly bound to the search code. I had a quick look at how the compiler optimises the functions:
- spherical: projection onto the plane, fewer operations.
- haversine: optimised to minimise power operations6 and has a direct cosine form.
- fast haversine: implemented using `sincospi` to reduce trigonometric operations.
- approximate haversine: omits some trigonometric operations (`sin`, mainly) at the cost of precision.
- great-circle: similar to haversine but with the classic formula.
- vincenty: the special case where the Earth is a sphere; resembles the two above.
I will not include the exact Vincenty – not quite exact in this case7 – because it’s not a direct computation.
I counted the number of lines in each main function and its branches. This count is an upper bound on the number of operations executed, as most sit behind conditional jumps – especially for the fast haversine. I also defined a few constants to shave off one operation here and there.8
It is interesting that the `sin` version compiles down to fewer instructions in the end. I tried fiddling with the code a bit but could not get the `cos` version to be shorter. Let’s see how they fare.
I decided to run benchmarks on a decent dataset: the 24M nodes of the road network of the USA. I ran some preliminary testing on smaller test sets but could not get reliable results.
| Function | Size | Total time (ns) | Time per query (ns) |
|----------|------|-----------------|---------------------|
Euclidean, having no trigonometric functions, is orders of magnitude faster than the rest. Then comes the spherical projection, which is much faster than the geodetic functions. I am actually surprised by how slow the great-circle is; I was expecting something competitive with the haversine.
Finally, the last interesting bit of information from that table: tweaking the haversine one way or the other is actually slower. Simply having a well-written implementation gets optimised better under the hood. This is even more true for the fast version, which may do well on GPUs but lags behind on CPUs. As for the approximate form, I don’t think a 5% improvement is worth it.
I had a quick look at the `perf` report to see where we were spending time. Not surprisingly, we get the same distribution. Below is a summary of the data collected, stopping at internal functions. The way to read it is by comparing children’s time with their parents’. E.g., if `warthog::geo::exact_distance` took 26.35% of the total running time and its parent, `run_distance`, took 44.39%, it means `exact_distance` accounts for 59.36% of its parent’s time.
Note: the distance functions are so fast that running the profiled code only takes about 40% of the total execution time; the rest is parsing the data.
```
--44.39%--run_distance
          |--26.35%--warthog::geo::exact_distance
          |          |--11.07%--__sincos
          |          |--6.63%--__atan2
          |          |--1.27%--__ieee754_asin_fma
          |          |--1.23%--__atan_fma
          |           --0.72%--__tan_fma
          |--4.46%--warthog::geo::fast_haversine
          |          |--0.79%--__fma_fma3
          |          |--0.62%--fma@plt
          |           --0.50%--__ieee754_asin_fma
          |--3.55%--warthog::geo::vincenty_distance
          |          |--2.09%--__sincos
          |           --0.99%--__atan_fma
          |--2.87%--warthog::geo::great_circle_distance
          |          |--1.40%--__sincos
          |           --1.04%--__ieee754_acos_fma
          |--2.13%--warthog::haversine_heuristic::h
          |          |--1.24%--__cos_fma
          |           --0.55%--__ieee754_asin_fma
          |--1.82%--warthog::geo::haversine_approx
          |           --1.30%--__atan2
           --1.72%--warthog::geo::haversine
                      --0.62%--__cos_fma
```
As a comparison, the Euclidean distance ran for 0.05% of the total time.
Now, let’s have a look at their precision. I did not manage to plot it because there are 58M datapoints. The five-number summary is below.
Interestingly, the exact procedure seems to produce the same results as the approximate ones. I think this is due to the data format:
- DIMACS coordinates are truncated to six significant digits, then converted to integers.
- We need to convert them back to doubles again.
- The distances are also encoded as integers, with a 0.1 m resolution.
So we may be using functions whose precision is too high to matter. Plus, all the rounding and converting may already be enough to drown out any difference.
Right now, I feel like I want a better dataset to try the functions on. The precision results are underwhelming and point to the data format. However, we got some good insights into the speed of the different distance functions.
There is still one thing we do not take into account: elevation. A PhD candidate in my lab has very detailed maps which he uses to route electric vehicles. In this case, elevation matters. For example, electric vehicles can use regenerative braking downhill. The longest path achievable on one charge depends on elevation changes.
An “oblate spheroid.” ↩
Which is unique if the points are not antipodal. ↩
But at most 0.5% with 64-bit floating points. ↩
I also followed their implementation. ↩
Although, from looking at the output, most modern compilers can inline power operations well. ↩
`2 * EARTH_RADIUS`. ↩