The Ramer–Douglas–Peucker algorithm, also known as the Douglas–Peucker algorithm and iterative end-point fit algorithm, is an algorithm that decimates a curve composed of line segments to a similar curve with fewer points. It was one of the earliest successful algorithms developed for cartographic generalization. It produces the most accurate generalization, but it is also more time-consuming than most other methods.[1]
The input to the algorithm is a curve, given as an ordered set of points (a polyline), and a distance threshold ε > 0.
The algorithm recursively divides the line. Initially it is given all the points between the first and last point, and it automatically marks the first and last point as kept. It then finds the point that is farthest from the line segment whose end points are the first and last points. If this farthest point is closer than ε to the line segment, then all points not currently marked as kept can be discarded without the simplified curve deviating from the original by more than ε.
If the farthest point is more than ε from the line segment, that point must be kept. The algorithm then recursively calls itself with the first point and the farthest point, and then with the farthest point and the last point; the farthest point is thereby marked as kept.
When the recursion is completed, a new output curve can be generated consisting of all and only those points that have been marked as kept.
The choice of ε is usually user-defined. Like most line fitting, polygonal approximation or dominant point detection methods, it can be made non-parametric by using the error bound due to digitization and quantization as a termination condition.[2]
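As a minimal illustration of such a termination condition (the square-grid bound below is an assumption chosen for this sketch, not necessarily the exact bound derived in [2]): coordinates rounded to a grid of spacing q are each displaced by at most q/2, so the Euclidean digitization error is at most q/√2, and any ε below that threshold only chases quantization noise.

import math

def digitization_epsilon(grid_spacing):
    # Rounding each coordinate to the nearest multiple of `grid_spacing`
    # moves a point by at most grid_spacing / 2 per axis, hence by at most
    # sqrt(2) * grid_spacing / 2 in Euclidean distance. (Illustrative bound.)
    return grid_spacing * math.sqrt(2) / 2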
Assuming the input is a one-based array:
# source: https://karthaus.nl/rdp/
function DouglasPeucker(PointList[], epsilon)
    # Find the point with the maximum distance
    dmax = 0
    index = 0
    end = length(PointList)
    for i = 2 to (end - 1) {
        d = perpendicularDistance(PointList[i], Line(PointList[1], PointList[end]))
        if (d > dmax) {
            index = i
            dmax = d
        }
    }

    ResultList[] = empty;

    # If max distance is greater than epsilon, recursively simplify
    if (dmax > epsilon) {
        # Recursive call
        recResults1[] = DouglasPeucker(PointList[1...index], epsilon)
        recResults2[] = DouglasPeucker(PointList[index...end], epsilon)

        # Build the result list
        ResultList[] = {recResults1[1...length(recResults1) - 1], recResults2[1...length(recResults2)]}
    } else {
        ResultList[] = {PointList[1], PointList[end]}
    }
    # Return the result
    return ResultList[]
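For concreteness, here is a minimal, runnable Python sketch of the same procedure (the (x, y) tuple representation and the helper name perpendicular_distance are choices made for this illustration, not part of the original pseudocode):

import math

def perpendicular_distance(point, line_start, line_end):
    # Distance from `point` to the infinite line through `line_start` and `line_end`.
    (x, y), (x1, y1), (x2, y2) = point, line_start, line_end
    dx, dy = x2 - x1, y2 - y1
    seg_len = math.hypot(dx, dy)
    if seg_len == 0.0:
        # Degenerate chord: fall back to plain point-to-point distance.
        return math.hypot(x - x1, y - y1)
    # |cross product| / base length = height of the triangle, i.e. the distance.
    return abs(dx * (y1 - y) - dy * (x1 - x)) / seg_len

def douglas_peucker(points, epsilon):
    # Simplify an ordered list of (x, y) points; zero-based mirror of the pseudocode.
    if len(points) < 3:
        return list(points)
    # Find the interior point with the maximum distance from the end-point chord.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            index, dmax = i, d
    if dmax > epsilon:
        # Recursively simplify both halves, which share the farthest point.
        left = douglas_peucker(points[:index + 1], epsilon)
        right = douglas_peucker(points[index:], epsilon)
        return left[:-1] + right  # drop the duplicated split point
    # Everything lies within epsilon of the chord: keep only the end points.
    return [points[0], points[-1]]

if __name__ == "__main__":
    curve = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6),
             (5, 7), (6, 8.1), (7, 9), (8, 9), (9, 9)]
    print(douglas_peucker(curve, 1.0))
    # [(0, 0), (2, -0.1), (3, 5), (7, 9), (9, 9)]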
The algorithm is used for the processing of vector graphics and cartographic generalization. It is recognized as the method that delivers the best perceptual representation of the original lines. However, the simplified curve can self-intersect if the accepted approximation is not sufficiently fine, which led to the development of variant algorithms.[3]
The algorithm is widely used in robotics[4] to perform simplification and denoising of range data acquired by a rotating range scanner; in this field it is known as the split-and-merge algorithm and is attributed to Duda and Hart.[5]
The running time of this algorithm when run on a polyline consisting of n − 1 segments and n vertices is given by the recurrence T(n) = T(i + 1) + T(n − i) + O(n), where i = 1, 2, ..., n − 2 is the value of index in the pseudocode. In the worst case, i = 1 or i = n − 2 at each recursive invocation, yielding a running time of O(n²). In the best case, i = n/2 or i = (n ± 1)/2 at each recursive invocation, yielding a running time of O(n log n).
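To see where these bounds come from, unroll the recurrence. In the worst case every call splits off a single vertex:

T(n) = T(2) + T(n − 1) + O(n) = O(2) + O(3) + ⋯ + O(n) = O(n²),

while a balanced split at every level gives T(n) = 2T(n/2) + O(n), which resolves to O(n log n) exactly as in mergesort.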
Using (fully or semi-) dynamic convex hull data structures, the simplification performed by the algorithm can be accomplished in O(n log n) time.[6]
Given specific conditions related to the bounding metric, it is possible to decrease the computational complexity to a range between O(n) and O(2n) through the application of an iterative method.[7]
The running time for digital elevation model generalization using the three-dimensional variant of the algorithm is O(n³), but techniques have been developed to reduce the running time for larger data in practice.[8]
Alternative algorithms for line simplification include:
Visvalingam–Whyatt
Reumann–Witkam
Opheim simplification
Lang simplification
Zhao–Saalfeld