Deelkar's Diary

Distributed Map rendering

Posted by Deelkar on 24 June 2008 in English.

After my rather pessimistic post last time, I want to say that I'm basically optimistic about the progress in OSM rendering efforts in general and tiles@home in particular.

There are, however, fundamental differences between centralised map rendering and distributed map rendering. Some are obvious, others less so.

Let me explain. The obvious difference is that one is centralised: it runs on one, or maybe a couple of, central servers that can, hopefully, render everything "on demand". Mapnik can do this. The main advantage is that this is highly efficient, as only the "interesting" parts of the world are rendered, to exactly the level of detail needed. The process is also fast and scalable enough to suit our current needs. However, this comes at the cost of a specialised dataset that cannot be updated by diffs, so while the throughput and rendering speed are very good, the latency is very bad (currently up to one week). Many consider this a major flaw, which is why projects like tiles@home were started.

The advantage of the distributed method is that it has (theoretically) a very low latency, and even in practice the "osmarender" layer generated by this project is generally up to date within hours of the corresponding edits.
The downside is that since the tiles have to be generated from live data, it doesn't make much sense to request data from the API for every little tile. So we work with tilesets: the area of one z12 tile is downloaded from the API, and then all tiles for that area, from zoom 12 up to zoom 17, are generated, regardless of whether anyone will ever look at them. This is efficient in a "save API resources" way, but not for the central server that has to manage all those tiles in the end.
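To give a feel for the numbers involved, here is a minimal sketch of the tileset arithmetic, assuming standard slippy-map tile addressing (the function name and the sample coordinates are hypothetical, chosen purely for illustration):

```python
def tileset_tiles(x12, y12, max_zoom=17):
    """Enumerate all (z, x, y) slippy-map tiles covering the area of
    one zoom-12 tile, from z12 down to max_zoom (17 in tiles@home)."""
    for z in range(12, max_zoom + 1):
        scale = 2 ** (z - 12)  # each zoom level quadruples the tile count
        for x in range(x12 * scale, (x12 + 1) * scale):
            for y in range(y12 * scale, (y12 + 1) * scale):
                yield (z, x, y)

# One z12 tileset contains 1 + 4 + 16 + 64 + 256 + 1024 = 1365 tiles,
# all rendered and uploaded whether or not anyone ever views them.
print(sum(1 for _ in tileset_tiles(2145, 1386)))  # -> 1365
```

So every tileset the server accepts is 1365 files to store and index, which is exactly why the load shifts from the API to the tile server.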
Of course, as with centralised rendering, you could split the load between multiple servers, which would relieve things somewhat, but then there will be another bottleneck: the API providing the data. There would have to be another data server besides the API, perhaps kept up to date via the minutely diffs or some kind of replication mechanism, just to serve the bulk requests from renderers.

So basically the two methods of rendering complement each other, and while it would be possible to remedy the shortcomings of either approach, it's not easily done. And even so, the existence of two different renderers gives a glimpse of what is possible with OSM data.

this doesn't seem to work

Posted by Deelkar on 5 May 2008 in English.

There are some problems with the t@h approach to rendering maps, but from my point of view there are two that might develop to the point where they become show-stoppers:

A) The server is too slow.
B) The clients can't handle certain tiles.

Yes, the server is too slow. It cannot handle enough clients to keep the world up to date in reasonable time, let alone retroactively re-render everything that needs to be rendered when a layer gets added to the portfolio of things we want to show.
For almost a year the server has not seen a speed improvement comparable to everything else in OSM.

Also, since several city tiles are getting very complex, there are fewer and fewer clients that can in fact render those tiles; next to nobody has a PC with 12+ GB of RAM lying around. The problem is that if we start pre-cutting the tiles into more manageable chunks, the OSM-to-SVG transformation becomes prohibitively slow. There should be a more effective way of keeping SVG complexity down when rendering high-zoom subtiles of densely mapped city areas, other than mucking around with the OSM data.

New tiles@home server

Posted by Deelkar on 8 November 2007 in English.

Yesterday Sebastian Spaeth (spaetz), Christopher Schmidt (crschmidt) and I set up the new server that will host the tiles@home project to relieve the dev server.

It's a nice machine with lots of fast storage, so currently we're seeing a speedup of at least a factor of 2. The actual speed increase is difficult to measure, because the currently running clients cannot fill the queue to any significant length. So if you have a tiles@home upload account, update your client to the newest revision and start rendering!

There are still some small issues, like the missing fall-through to the old dev tiles, but nothing that would hinder normal rendering, and we're working on those, so they should disappear soon.

as Steve Coast would say:
Have fun :)

tiles@home

Posted by Deelkar on 1 November 2007 in English.

Some of you might know that I'm one of the core developers of the tiles@home rendering client, and sometimes work on easy parts of the serverside code.

Currently I'm more in "firefighting" mode than anything else, held back only by lack of free time and the slowness of the dev server, which runs the serverside stuff.

What I'd like to see would include a tah.openstreetmap.org server, separate from dev and not a vhost, with its own disks, tuned a bit better to meet the requirements of the project (i.e. lots of fast HDD space, lots of bandwidth, maybe a bit more RAM than dev currently has). But since OSM is very much an "if you want something done, do it yourself" sort of project, this will take a while: I don't have the time or money to go to London and set said machine up, apart from the fact that I probably couldn't even get into the server room. So my only hope is that the publicity and popularity of tiles@home within the OSM project grows to the point where the admins who have physical access to the servers, and the people with the money, see the necessity of giving it its own server.

Current problems that I'd like to fix, but that are too complex for the time I can dedicate to them:
- use a new rasterizer. Inkscape sucks.
- implement better client-server communication to handle error situations
- improve detection and handling of errors in client, especially systemic errors like broken external software
- ...