Google Maps API idea

Soldato · Joined 11 May 2007 · Posts 9,022 · Location Surrey
I want to either learn to write or (more likely) assemble a team to build a National Rail-based tool on the Google Maps API, where users could scroll around a map looking at the locations of rail stations and ultimately see an overlay of information: journey times, season ticket prices, first and last trains, etc. All of this data is available on the National Rail site.

Anyone interested in pointing me in the right direction?
 
Probably more likely the latter, though I was hoping that the tool would be free; as far as I'm aware, a business API licence is needed in order to profit from Google Maps data.

I was thinking more along the lines of a team of hobbyists practising skills and learning something new...
 
Should be quite straightforward. Do National Rail have an XML/JSON feed you can get information from, or would you have to scrape it on a regular basis to keep it up to date?
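(If it helps, a quick way to tell the difference, once you've found a candidate endpoint, is to look at the Content-Type of the response. A minimal Python sketch; the header values below are just illustrative:)

```python
# Hypothetical helper: decide from an HTTP Content-Type header whether a
# response looks like a structured feed (XML/JSON) or plain HTML that you
# would have to scrape instead.
def looks_like_feed(content_type: str) -> bool:
    """Return True if the Content-Type suggests machine-readable data."""
    feed_types = ("application/json", "application/xml", "text/xml")
    base = content_type.split(";")[0].strip().lower()
    return base in feed_types

print(looks_like_feed("application/json; charset=utf-8"))  # True: a feed
print(looks_like_feed("text/html; charset=utf-8"))         # False: scrape it
```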

They must have something, since they also have the web app, but whether it's free or not is another issue :)
 
Could it not scrape data from their own website rather than an app? And yes, there must be a shed load of data. But I guess once the correct fields are identified and the terms quantified (i.e. start station, end station), it should be able to pull the information it needs quite easily?

For example:
Sutton to Victoria
Fast journey time (0 changes): ~30 mins
Slow journey time (2 changes): ~50 mins
7 day railcard: £38.20
7 day travelcard: £49.80

I suppose whether this is feasible or not comes down to how National Rail stores its data? And also making sure it was 'tamper proof', i.e. if National Rail didn't like the tool I was producing and changed the way they formatted their data, then my app would be buggered, so it would need some sort of failsafe?
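(As a sketch of the "identify the fields, then pull them out" idea: assuming the journey results page marked its values up with predictable class names, which are entirely made up here and not National Rail's actual markup, a scraper might extract them like this. This also shows why a markup change would break it.)

```python
import re

# Made-up HTML resembling a journey-results page; the class names and
# structure are pure assumption for illustration.
html = """
<div class="journey">
  <span class="from">Sutton</span>
  <span class="to">Victoria</span>
  <span class="duration">30 mins</span>
  <span class="price">£38.20</span>
</div>
"""

def extract(field: str, page: str) -> str:
    """Pull the text inside <span class="FIELD">...</span>, or "" if absent."""
    match = re.search(r'<span class="%s">([^<]+)</span>' % field, page)
    return match.group(1) if match else ""

journey = {f: extract(f, html) for f in ("from", "to", "duration", "price")}
print(journey)
# {'from': 'Sutton', 'to': 'Victoria', 'duration': '30 mins', 'price': '£38.20'}
```

If the site renamed a class, `extract` would start returning empty strings, which is exactly the kind of thing a failsafe would need to detect.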
 
Could it not scrape data from their own website rather than an app?

feasible or not?

This is what is called an API. It's what mobile apps use to get their data, for example; it's a way of getting data without having to parse a design (in case the design changes).


Everything is feasible; it all depends how much funding/time you have to a) keep up with their design changes and b) implement something in the first place.
 
Indeed, I'm familiar with the term API, so I know it exists as a 'common ground', in this instance between National Rail and Google Maps, so the data from one can be used in the other. However, the comment I responded to was about National Rail apps; I don't think the information I'd want is available there, so the API would have to find the data on their website, but I have no idea in what form that exists.

I've got a year and no money. :D

Yes, a) scares me: putting in a lot of work only for them to change the format of their data. And b) I'd only be interested in perhaps putting it up on the web, not as an app.
 
As some others have said, it's a pretty simple implementation, but the problem is getting the data: there is going to be a mountain of it, merely scraping it from a website would take a long, long time, and you would probably get blacklisted pretty quickly.

You need to get access to a proper API for the data, or have some kind of collection of data to feed into the application; once you can get your hands on that, the rest should be pretty simples!
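(One partial mitigation for the blacklisting problem is to pace and randomise your requests so the access pattern doesn't repeat at identical intervals. A rough Python sketch; the delay numbers are arbitrary:)

```python
import random
import time

def polite_delay(base: float = 5.0, jitter: float = 3.0) -> float:
    """Return a randomised pause in seconds, between base and base + jitter,
    so successive requests don't arrive at the exact same interval."""
    return base + random.uniform(0, jitter)

def fetch_all(urls, fetch):
    """Fetch each URL via the supplied fetch() callable, sleeping a
    randomised polite_delay() between requests."""
    results = []
    for url in urls:
        results.append(fetch(url))
        time.sleep(polite_delay())
    return results
```

It won't make scraping fast, but a slow, jittered crawl is far less likely to stand out in their logs than a burst at the same time every day.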
 
Sounds like National Rail are ***** through to the core. I thought it was just surface-level arseholishness (unjust rail fare increases, lack of information about trains, dirty trains, rude staff), but it goes right through!

So, to avoid paying money for publicly available information... I assume one would get around that by writing a program which populates fields on their website and collects the results? Though I would assume their site would have measures against this? Or is my thinking totally ridiculous?
 
That's just a web scraper. You enter a URL, which chucks back the HTML, and you process it to extract the information you want. The only issues you can really run into are if you hit their site too regularly for them not to notice, or at the exact same time each day, running through the same pages.

You don't normally need to populate fields; you simply do a manual search and take a look at the filters/variables being used in the URL.

Take a look at Google's URL when you do a search and you will see that you can adapt them to tailor the results.
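(In Python, the same trick, building the search URL directly instead of filling in form fields, looks something like this. The endpoint and parameter names below are invented; you'd copy the real ones from the URL your browser produces on a manual search:)

```python
from urllib.parse import urlencode

# Hypothetical journey-search endpoint; not a real National Rail URL.
BASE = "https://example-journey-planner.example/search"

def search_url(origin: str, destination: str, **extra) -> str:
    """Build a GET URL equivalent to submitting the search form by hand."""
    params = {"from": origin, "to": destination, **extra}
    return BASE + "?" + urlencode(params)

print(search_url("SUO", "VIC", time="0900"))
# https://example-journey-planner.example/search?from=SUO&to=VIC&time=0900
```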
 
That's just a web scraper. You enter a URL, which chucks back the HTML, and you process it to extract the information you want.

Erm, no. It's a bit more than that. It is a web service, which to all intents and purposes is a web-based API. If they didn't have the licence restriction and the need to have and send a valid token, you could easily knock up a PowerShell script to connect to it and send queries. The only URL you need to talk to is

https://realtime.nationalrail.co.uk/ldbws/wsdl.aspx


Stick...

get-help New-WebServiceProxy -Full

into PowerShell for an easy example of talking to an open web service. The website the example connects to is worth a look as well, as they have various other web services to connect to.
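(For anyone not on Windows, the rough Python equivalent of New-WebServiceProxy, without pulling in a SOAP library, is to build the XML envelope yourself and POST it to the service. A hedged sketch only: the operation name, token element, and missing namespaces here are placeholders, not the real LDBWS schema; you'd read the actual contract from the WSDL linked above:)

```python
# Sketch of hand-rolling a SOAP request body. The AccessToken and
# GetDepartureBoard element names are placeholders for illustration;
# the real element names and namespaces come from the service's WSDL.
SOAP_TEMPLATE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <AccessToken>{token}</AccessToken>
  </soap:Header>
  <soap:Body>
    <GetDepartureBoard>
      <crs>{station}</crs>
    </GetDepartureBoard>
  </soap:Body>
</soap:Envelope>"""

def build_request(token: str, station: str) -> str:
    """Fill the envelope template for one departure-board query."""
    return SOAP_TEMPLATE.format(token=token, station=station)

envelope = build_request("your-token-here", "VIC")
print(envelope)
```

The envelope would then be POSTed to the service URL with a `Content-Type: text/xml` header; without a valid token the service would reject it, which is exactly the licence restriction being discussed.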

If all you were doing was passing your own URLs to download the HTML and then trying to parse it yourself, I'd probably use something like...

(New-Object Net.WebClient).DownloadString("http://somewebpage.com/")

or possibly a COM object (for IE, for example) and try to manipulate it that way.

Which I agree with you, you could likely get away with as long as you didn't hammer it too much and make it stand out from normal browser access, but then it gets back to the other issues discussed above.

Or, I've totally misunderstood your post? :confused:
 
No, that makes sense to me; pretty much as I imagined, including the drawbacks. I like it when things I think are possible turn out to have existed for years (even if it means I'm totally behind and not some sort of internet lord).
 