Creating a Google transit feed for fun and profit

Apr 23rd, 2009


People frequently ask me how I manage to collect and input the data used by hbus.ca, given Metro Transit’s intransigence. The “bike and GPS” angle is well known by now, but what about the rest of the process? How do I get the data into a format that hbus.ca can consume?

The de facto standard for the interchange of transit information is the Google Transit Feed Specification (GTFS). This exceedingly simple comma-separated value format is now supported by a plethora of software, including Google Transit and Graphserver, as well as my very own libroutez (used by hbus.ca). It was obvious to me right from the beginning that the first step to building hbus.ca would be to create one of these feeds.

Manipulating a GTFS by hand is probably not a great idea. It’s basically a dump of a relational database, and is pretty inscrutable from the point of view of a human being. What I really want is to manipulate things at the level of stops, service periods, and routes, and let some kind of abstraction layer take care of the low-level details. Fortunately, the awesome engineers at Google created a Python library called Google Transit Data Feed, which helps with creating one of these things by providing abstractions of the key elements of a transit feed (stops, service periods, etc.). You can then write a program which uses these abstractions to create and save a GTFS.
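Under the hood, a GTFS is just a zip archive of CSV tables; the library shields you from writing those directly. As a rough sketch of what it ultimately produces, here is a minimal stops.txt written with only the standard library (the stop coordinates are from my GPS dump, but the name pairing and file layout here are illustrative, not the library’s actual code):

```python
import csv
import io
import zipfile

# A GTFS feed is a zip of CSV tables; stops.txt is one of the
# required ones. The abstraction library builds these tables from
# Stop/Route/ServicePeriod objects instead of raw rows.
stops = [
    {"stop_id": "6785", "stop_name": "Gottingen and Young",
     "stop_lat": "44.65825", "stop_lon": "-63.59252"},
]

buf = io.StringIO()
writer = csv.DictWriter(
    buf, fieldnames=["stop_id", "stop_name", "stop_lat", "stop_lon"])
writer.writeheader()
writer.writerows(stops)

# Write the table into the feed archive.
with zipfile.ZipFile("feed.zip", "w") as feed:
    feed.writestr("stops.txt", buf.getvalue())
```

A real feed also needs agency.txt, routes.txt, trips.txt, stop_times.txt and calendar.txt, which is exactly why letting the library manage the bookkeeping is worthwhile.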

Of course, providing the library with the appropriate information is easier said than done. Metro Transit’s PDF schedules are not readily computer-parsable (being designed to be printed out, after all). I needed some kind of semi-automated way of converting a Metro Transit schedule into a GTFS, or this whole project was going nowhere fast.

As an initial step, it turns out that it’s quite possible to extract textual information from a PDF using the open source poppler library. From there, it’s possible to extract the stopping times for an individual bus route. For example, take the case of adding route 60 (Portland Hills), something I’m currently working on. All I had to do was download the PDF file from Metro Transit’s site and then run the following on the command line:
```
pdftotext -raw route60.pdf
```
The -raw option makes sure the raw strings are dumped to disk, with no attempt made to preserve formatting. The result is a text file with content like this in it:
```
842a 847a 855a 858a 903a 906a 912a -
857a 902a 910a 913a 918a 921a - 925a
910a 915a 923a 926a 931a 934a 940a -
940a 945a 953a - 1000a 1003a 1009a -
...and every 30 minutes until
210p 215p 223p - 230p 233p 239p -
```
This type of format can be parsed easily enough. To create a proper transit feed, though, schedule information isn’t enough: you also need to know the locations of the stops, the names of routes, etc. After some deliberation, I determined that I needed some kind of intermediate format to store the above schedule information along with this additional information, readable both by humans (to ease its creation) and by machines.
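To give an idea of how simple the parsing is, here is a sketch of a parser for one row of the pdftotext output (assuming, as in the sample above, “a”/“p” suffixes for am/pm and “-” for a timepoint the run skips; this is not the exact code I use):

```python
def parse_schedule_line(line):
    """Parse one row of 'pdftotext -raw' schedule output into
    minutes past midnight; None means the run skips that timepoint."""
    times = []
    for tok in line.split():
        if tok == "-":
            times.append(None)
            continue
        # e.g. '903a' -> 9:03 am, '210p' -> 2:10 pm
        ampm, digits = tok[-1], tok[:-1]
        hour, minute = int(digits[:-2]), int(digits[-2:])
        if ampm == "p" and hour != 12:
            hour += 12
        if ampm == "a" and hour == 12:
            hour = 0
        times.append(hour * 60 + minute)
    return times

print(parse_schedule_line("842a 847a 855a 858a 903a 906a 912a -"))
# -> [522, 527, 535, 538, 543, 546, 552, None]
```

The “...and every 30 minutes until” lines are the annoying part: those still need a human (or a smarter script) to expand.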

The obvious markup for something like this is YAML (if you’re still using XML to store structured information, run, don’t walk, and look at YAML: you can thank me later). Simple, clean, effective. GTFS is still the better choice for consuming the information in another application, as its representation is much more amenable to being stored in a graph. Here are a few examples of my YAML format in action:

- 7 (Robie to Gottingen)
- 10 (Westphal)
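The stop entries in these files are one-line YAML mappings. Here is a sketch of what the stop-list portion looks like (the field names match the converted GPS dump described below; the top-level `stops` key and the second stop’s name are illustrative):

```yaml
# One mapping per stop: human-readable name, GoTime stop code,
# and position. ('stops' key and second entry's name are assumed.)
stops:
  - { name: Gottingen and Young, stop_code: 6785, lat: 44.65825, lng: -63.59252 }
  - { name: Gottingen and Kaye, stop_code: 6768, lat: 44.65982, lng: -63.59452 }
```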

Besides the scheduling information, the other main interesting component of a GTFS is the location of the stops. As anyone who’s used a Metro Transit schedule has noticed, only major timepoints are covered in the PDF schedules. What of all the stops in between? This is where the bike and GPS come in.

What I did was take a standard GPS from Mountain Equipment Co-op (a Garmin GPSMap 60x), get on my bike, and record the GoTime numbers and positions of the individual stops between the major timepoints. I then took the device back to my computer and, using a utility called GPSBabel, dumped out the stop information as comma-separated values. It looks like this:
```
44.65825, -63.59252, 6785-21-31-33-34-35-3-7
44.65982, -63.59452, 6768-21-31-33-35-86-3-7
44.66113, -63.59659, 6782-21-31-33-34-35-3-7
```
The first two items are the latitude and longitude, giving the position of the stop. The last item is a GoTime number, followed by the set of buses which pass by the stop. Turning this into YAML is a matter of applying the following regular expression to the input:
```
\([0-9]+.[0-9]+\), \(-63.[0-9]+\), \([0-9]+\)- -> - { name: xxx, stop_code: \3, lat: \1, lng: \2 }
```
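The same transformation in Python looks like this (I’ve escaped the dots and added a trailing `.*` to drop the bus list, so it isn’t a character-for-character translation of the sed-style pattern above; the `-63` longitude prefix is specific to Halifax):

```python
import re

line = "44.65825, -63.59252, 6785-21-31-33-34-35-3-7"

# Rewrite "lat, lng, code-buses..." into a one-line YAML mapping.
# The stop name gets filled in afterwards, hence the 'xxx' placeholder.
yaml_line = re.sub(
    r"([0-9]+\.[0-9]+), (-63\.[0-9]+), ([0-9]+)-.*",
    r"- { name: xxx, stop_code: \3, lat: \1, lng: \2 }",
    line,
)
print(yaml_line)
# -> - { name: xxx, stop_code: 6785, lat: 44.65825, lng: -63.59252 }
```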
To get an actual name for each stop (e.g. “Gottingen and Young”), I wrote a simple script which finds the nearest intersection to the stop in the GeoBase dataset. I then corrected the names at my discretion, based on my on-the-street knowledge of the layout of Halifax, and added certain details to help the user (e.g. bus stops on the way to the south end of Halifax are marked “south bound”).
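The nearest-intersection lookup can be sketched as a haversine-distance search over a list of named intersections (this is the general technique, not my actual script; loading GeoBase is omitted, and the intersection coordinates below are made up for illustration):

```python
import math

def distance_m(lat1, lng1, lat2, lng2):
    """Great-circle distance in metres (haversine formula)."""
    rlat1, rlat2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlng = math.radians(lng2 - lng1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(rlat1) * math.cos(rlat2) * math.sin(dlng / 2) ** 2)
    return 6371000 * 2 * math.asin(math.sqrt(a))

def nearest_intersection(stop, intersections):
    """Return the (name, lat, lng) intersection closest to a
    (lat, lng) stop, e.g. intersections extracted from GeoBase."""
    return min(intersections,
               key=lambda i: distance_m(stop[0], stop[1], i[1], i[2]))

# Hypothetical intersections near the stop at 44.65825, -63.59252.
intersections = [
    ("Gottingen and Young", 44.6584, -63.5927),
    ("Gottingen and Kaye", 44.6601, -63.5948),
]
print(nearest_intersection((44.65825, -63.59252), intersections)[0])
# -> Gottingen and Young
```

A linear scan is fine at this scale; with the full GeoBase road network you’d want a spatial index instead.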

With these two elements in place (a format for creating human-readable transit information and a library for creating a GTFS), the only thing left to do was write a program to bridge the gap. With that in place, creating a Google transit feed for Halifax is a simple matter of typing “make”.

Is this a ridiculous amount of work? I wouldn’t say so. The vast, vast majority of my work on hbus.ca has been in creating the pathfinding code and geocoding functionality. This is work that can be translated to many different municipalities, and can easily be extended and made more useful in a myriad of ways.

What does seem a little intimidating to me is completing what I started. Capturing bus stop information for the Halifax peninsula is one thing, but covering the outlying areas (Bayer’s Lake, Sackville, etc.) is quite
another. There’s a lot of biking involved there, more perhaps than what one person can reasonably be expected to do. It was my hope that the initial release of hbus would validate the model of community-developed transit software to Metro Transit and they would see the benefit of releasing their internal copy of this data to the public, but unfortunately that doesn’t seem to have happened.

Getting that problem solved seems to be more a political problem than a technical one, and it’s not my specialty. It really does make me wonder if I shouldn’t reconsider the option of crowdsourcing, which I had rejected earlier.

hbus.ca and thoughts about crowdsourcing

Mar 25th, 2009


hbus in action

So I opened up my baby, hbus.ca, to the public last week (though traffic only really started to pick up yesterday, after a positive article in The Daily News). This site, a trip planner for the Halifax Regional Municipality, was the culmination of about six months of part-time work on my part, between contracts for my awesome company, Navarra.

I’m debating whether or not to start a separate blog for hbus. At the moment I’m leaning towards no: my thinking is that most people don’t care about the inner workings of a site like hbus. They just want to figure out how to get from point A to point B. Those who do care can read the rest of what I (and my part-time co-conspirator, Peter McCurdy) have to say.

The most glaring limitation in hbus right now is that its route coverage is woefully limited. Trips on the main Halifax peninsula are generally planned pretty effectively. If you’re travelling to a suburban area like Bayer’s Lake or Burnside, not so much (unless you’re lucky enough to be starting/ending near a bus timepoint). What is to be done?

A frequent suggestion I get from more technically minded folks is that I should “crowd source” the missing information. This basically means creating a Wikipedia-like architecture where people could contribute their favourite stops, routes, etc.

It’s a tempting idea. Sites like OpenStreetMap show that this approach can be very effective for gathering large amounts of geographical data. Frankly, though, I’m not convinced it’s the right approach here. The fact is that Metro Transit MUST have a complete set of stops, route schedules, and route plans internally. There’s no way they could plan their operations halfway effectively otherwise. Why should I burden the public with the task of recreating something which has already been done?

I may be crazy, but I think the best avenue for the moment is to try to convince Metro Transit that it would be worthwhile to make this information public. I paid for the generation of the information with my tax dollars; why shouldn’t I be able to make use of it? The preferred format for this information is GTFS, but I could make use of information in just about any representation (ArcGIS, etc.). Just give me what you have, and I’ll take care of the rest. Over 20 of the most successful transit agencies in North America (many of them much bigger than Metro Transit) have opened up their information to the public, with only positive results.

The most obvious use of this information is a trip planner and, yes, I know every agency and their dog has (or will have) one of these. But maybe someone has a cool idea on how to make a trip planner easier to use (compare with Tous Azimuts). Or what about transit maps that help people figure out where to live? Or iPhone and Blackberry applications? Or cool screensavers? Or or or. The possibilities are truly endless once the data is out there. Come on Metro Transit, you have nothing to lose and the eternal love of your ridership to gain.

Breaking news: Cat watches himself on TV

Mar 6th, 2009


Cat watching himself

Originally uploaded by William Lachance

Been meaning to update this more, but I’ve been rather busy. Quite a bit of excitement going on, though I haven’t been making time to blog about it (I’ve been more into the Twitter thing lately).

More soon.

Maps URLs on mobile Safari

Feb 2nd, 2009

I’ve been experimenting a little bit with Maps URLs on the iPhone. If you’ve read Apple’s web developer guidelines, you’ll know that URLs of this form will automatically redirect to the Maps application:

Halifax, Nova Scotia

This is fine if you just want to highlight one particular location (with no custom metadata), but what if you want to do something more interesting, like display a KML file? You can load these easily from the Maps application, so why can’t you link to them from a web browser? The URL guidelines explicitly say that the KML part of a query string will be discarded, and indeed it is. What is a web developer to do? Resort to undocumented behaviour, of course! At least in version 2.2 of the iPhone software, URLs which request a “maps” resource with the appropriate parameters will automatically load the appropriate KML file in the Maps application:

[Map link][2]

[2]: maps://?geocode=&q=