I have a number of secret weapons that I have discovered over the years working with Bing Maps. One of the simplest, but coolest, is known as "Vertex Reduction". What is this, you ask? It basically allows you to reduce the number of vertices in a shape based on a tolerance.

When the spatial tools came out in SQL 2008, one of the best features I found in there was the Reduce method. This method allowed us to reduce the resolution of complex shapes. It made a huge difference when working with ShapeFile data, as I often found the resolution of data in that format to be higher than needed for web-based applications, especially when zoomed out. In Bing Maps V6.3 we have seen that there are some optimizations applied to shapes based on the zoom level; these optimizations basically reduce the resolution of the shape.

When creating a bunch of tools for the Bing Maps Silverlight control a few years ago, I found that the optimization in V6.3 was not being used. I also found that all shapes were constantly being re-rendered when the map moved, even if they were not in view. I decided to see if I could make some optimizations of my own. In the end I created a set of classes capable of rendering 9 MB worth of complex polygon data in the Bing Maps Silverlight control with no performance issues. Combining this with the OptimizedMapLayer class I created, I was able to get 100 MB of data to render with no performance issues when panning and zooming. So how did I do it?

**Douglas–Peucker Algorithm**

The Reduce method in SQL 2008 is rumored to be based on the Douglas–Peucker algorithm. This algorithm has a worst-case time of O(*nm*) and an expected time of O(*n log m*), where *m* is the size of the simplified list of coordinates. This isn't that bad, but it is dependent on the output: if *m* is small the algorithm runs fast, but if it is large then it runs slow. For instance, if our resulting list of coordinates is reduced to 10, then *log m* = 1 (base 10) and the complexity would be roughly O(*n*); as *m* grows beyond that, performance degrades. Since the whole purpose of using something like this is to reduce the resolution of complex geometries to improve performance, chances are *m* will be larger than 10 in most cases. This might work, but I had a feeling it was worth doing more investigation to see if there was something better.
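To make the comparison concrete, here is a minimal Python sketch of the textbook Douglas–Peucker algorithm (not the actual SQL Server internals). It shows where the output-dependent cost comes from: every point that survives triggers another split, so the recursion depth grows with the size of the simplified result.

```python
import math


def _line_dist(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    if length == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / length


def douglas_peucker(points, tolerance):
    """Simplify a polyline, keeping points that deviate more than tolerance."""
    if len(points) < 3:
        return list(points)
    # Find the point farthest from the chord between the two endpoints.
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = _line_dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= tolerance:
        # Everything in between is within tolerance of the chord.
        return [points[0], points[-1]]
    # Keep the far point and recurse on both halves; this recursion is what
    # makes the running time depend on the size of the simplified output.
    left = douglas_peucker(points[: idx + 1], tolerance)
    right = douglas_peucker(points[idx:], tolerance)
    return left[:-1] + right
```

For example, `douglas_peucker([(0, 0), (1, 0.01), (2, 0)], 0.1)` drops the nearly collinear middle point and returns just the two endpoints.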

**Vertex Reduction Algorithm**

In my searching I came across a method called the Vertex Reduction Algorithm. It has a complexity of O(*n*) in all cases, which is much better when dealing with a large number of coordinates. This method starts with the first vertex and removes any clustered vertices that are too close to the last kept vertex, based on some tolerance. Having done some performance tests between these two algorithms, and also some visual tests on the resulting rendered shapes, I found this algorithm to be the best. Furthermore, since I wanted to change the resolution as the zoom level changed, I found that the ideal tolerance for each zoom level could be calculated with a formula.

Don't ask how I came up with it; there was a lot of math involved, and it's based on how the tile system works in Bing Maps. In this case the tolerance is a distance in degrees. At zoom level 19, the tolerance is approximately 0.00001 degrees, which is pretty close to a single pixel. That means only one coordinate is rendered per pixel, which is great, as having two or more coordinates in the same pixel is just a waste.
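The formula image from the original post isn't reproduced in this copy. As a rough, hedged reconstruction of the idea (not the author's actual formula), the width of one pixel in degrees of longitude follows directly from the Bing Maps tile system: 256-pixel tiles, with 2^zoom tiles spanning 360 degrees at each zoom level. At zoom 19 this comes out to about 2.7 × 10⁻⁶ degrees, the same order of magnitude as the ~0.00001 quoted above.

```python
def pixel_tolerance(zoom_level):
    # Width of one pixel in degrees of longitude at a given Bing Maps zoom
    # level: 360 degrees divided across 2**zoom tiles of 256 pixels each.
    # A reconstruction for illustration, not the original post's formula.
    return 360.0 / (256 * 2 ** zoom_level)
```

Using a per-zoom tolerance like this keeps roughly one coordinate per pixel, so the reduction gets more aggressive the further out you zoom.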

So, without further ado, here is the C# version of this algorithm, which I use with the Silverlight control to optimize my shapes.

```csharp
public static LocationCollection VertexReduction(this LocationCollection locations, double tolerance)
{
    //Verify that there are at least 3 or more locations in the LocationCollection
    if (locations == null || locations.Count < 3)
    {
        return locations;
    }

    LocationCollection newCoordinates = new LocationCollection();

    //Store the initial location
    newCoordinates.Add(locations[0]);
    Location baseCoord = locations[0];

    //Remove expensive square root calculation from loop and compare to squared tolerance
    double sqTolerance = tolerance * tolerance;

    for (int i = 1; i < locations.Count - 1; i++)
    {
        //Check to see if the distance between the base coordinate and the coordinate in question is outside the tolerance distance.
        if (Math.Pow(locations[i].Latitude - baseCoord.Latitude, 2) + Math.Pow(locations[i].Longitude - baseCoord.Longitude, 2) > sqTolerance)
        {
            //Store the coordinate and make it the new base coordinate for comparison.
            newCoordinates.Add(locations[i]);
            baseCoord = locations[i];
        }
    }

    //Store the last coordinate
    newCoordinates.Add(locations[locations.Count - 1]);

    //Return the new coordinate collection
    return newCoordinates;
}
```

Besides using it with the Silverlight control, this could be used in a lot of other places for reducing the resolution of complex shapes. I use this in some custom code for importing data into SQL 2008, with a tolerance of 0.00001. This is great, as I end up storing data at only the lowest resolution I need, which is not noticeable when rendered on a map.
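For readers outside Silverlight, the same idea can be applied at import time in any language. Here is a Python transliteration of the C# method above, applied with the 0.00001 tolerance mentioned; the coordinate tuples and sample data are just for illustration.

```python
def vertex_reduction(coords, tolerance):
    # Transliteration of the C# method above: keep a vertex only when it is
    # farther than `tolerance` from the last kept vertex.
    if coords is None or len(coords) < 3:
        return coords
    sq_tolerance = tolerance * tolerance  # avoid a sqrt inside the loop
    kept = [coords[0]]  # always keep the first coordinate
    base_lat, base_lon = coords[0]
    for lat, lon in coords[1:-1]:
        if (lat - base_lat) ** 2 + (lon - base_lon) ** 2 > sq_tolerance:
            kept.append((lat, lon))
            base_lat, base_lon = lat, lon
    kept.append(coords[-1])  # always keep the last coordinate
    return kept


# A dense line of 1,001 points spaced 0.000001 degrees apart collapses to a
# small fraction of that at the 0.00001 tolerance used for SQL import.
dense = [(0.0, i * 0.000001) for i in range(1001)]
reduced = vertex_reduction(dense, 0.00001)
```

The endpoints always survive, so the reduced shape still lines up with neighboring geometry.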

Now, I thought about leaving it here and letting you figure out how to implement this, but instead I decided to take the MultiShape library I created for the Silverlight control a while back and add this to it. I have a library that I use all the time in Silverlight map applications that is almost identical to this, and have had fun rendering thousands of polygons with millions of coordinates within seconds on the map. You can download the optimized version of the MultiShape library I created here. Enjoy.


Hi,

I would like to point out that some of your code can be improved. Your current implementation does not support data binding or the generation of MultiShapeBase child classes via templating.

First of all, the way you implement the dependency property Vertices could be improved to a proper dependency property implementation, as its current state does not support data binding:

```csharp
public const string VerticesPropertyName = "Vertices";

public IList Vertices
{
    get { return (IList)GetValue(VerticesProperty); }
    set { SetValue(VerticesProperty, value); }
}

public static readonly DependencyProperty VerticesProperty = DependencyProperty.Register(
    VerticesPropertyName,
    typeof(IList),
    typeof(MultiMapShapeBase),
    new PropertyMetadata(null, VerticesChanged));

public static void VerticesChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
    LocationsChanged(d, e);
}
```

Second, because you assume the parent is a layer, using Item and Data templates is impossible as well. If you implement it like this, you can use them:

```csharp
public MapBase ParentMap
{
    get
    {
        var projectable = Parent as IProjectable;

        if (projectable == null)
        {
            var parent = VisualTreeHelper.GetParent(this) as ContentPresenter;
            if (parent != null)
            {
                projectable = VisualTreeHelper.GetParent(parent) as IProjectable;
            }
        }

        if (projectable != null)
        {
            return projectable.ParentMap;
        }

        return null;
    }
}
```

Net result: I can now use your code in the following setup:

In which “Bakken” is a list of objects containing a Geometry property of type IList

Cheers,

Joost

Hi! Love your blog.

Is there support for WPF/Bing? ProjectionUpdateLevel doesn't seem to be in the WPF Bing Maps control for the MultiPolygon and MultiLineString classes (only in Silverlight).

I'm importing shape files and have all the pieces. I'm using the SqlGeometry class to validate the data.

In a perfect world, we'd have Map.Add(sqlGeometry) and, voilà, no conversions. :-)

Thoughts? What would you suggest to be able to get a SqlGeometry into a map?

I agree it would be nice to have a simple method to add SqlGeometry/SqlGeography objects to the map. I've created some tools in the past to support this, but nothing I can post. What you can do is create extensions that add support for this kind of data. It would require some work to set up, but it is technically possible.

Do you have the sample code for your previous article, "Optimize Map Layers in Bing Maps Silverlight"?

The sample code is right inline in the blog post here: https://rbrundritt.wordpress.com/2010/11/19/optimize-map-layers-in-bing-maps-silverlight/ There is no downloadable zip file for the example.
