Software/service to duplicate a data feed to two separate servers.

Hi,

I am looking for a solution for a client.

They have two servers with a constant (24/7) data feed (UDP).

They would like the single feed to go to both servers, and if one server should be down, there needs to be the ability to queue the data feed for only that server, to be released once the server is back.

Is there any service (cloud service) or software anyone is aware of that could do this?

Thanks
RB
 
That is slightly more problematic. If the data is simply stored on local storage once it reaches its destination, would there be any harm in simply monitoring the destination folder/folders at each node, and replicating between them should one "lag" behind?
 

Unfortunately, I believe the data flows into a Postgres database straight from the feed. The feed is free and the software capturing the feed is open source.

The bottom line is trying to get rid of as many single points of failure as possible, since that is the point of buying a second server. Second server, second datacentre.

Are there any other alternatives that allow a single domain name to feed two separate IP addresses at a distributed layer (i.e. at DNS, so a single server going down does not knock out the whole system)?

RB
 
Does the source server use an FQDN to find the target server for the data stream?

If so, something like Simple Failover might do the job.

Thanks for that. This looks good for another requirement.

OK, the client is being a bit secretive, but it seems a hardware device sends data to his server via either an IP address or an FQHN. The client has a piece of software that listens on a port on the server for UDP datagrams, then loads them into a database.

My guess is that it is a tracking system (stolen cars or bikes, bus tracking, mobile phone tracking or something like that).

RB
 
Sounds like you would need a proxy-type device in between which would manage the links and queue data as needed, although this introduces another single point of failure and would probably be a custom solution.

As mentioned, you would be better off with database replication.
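For what it's worth, a minimal sketch of what that could look like with PostgreSQL 9.x built-in streaming replication (the addresses and the repuser account below are made-up examples):

    # primary: postgresql.conf
    wal_level = hot_standby
    max_wal_senders = 2

    # primary: pg_hba.conf -- let the standby connect for replication
    host  replication  repuser  10.0.1.1/32  md5

    # standby: recovery.conf
    standby_mode = 'on'
    primary_conninfo = 'host=10.0.0.1 user=repuser'

The standby stays read-only and catches up automatically after an outage, which handles the queue-and-release requirement at the database layer instead of the UDP layer.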

IMO you will need a custom solution, as you are wanting to queue a protocol that was designed not to be queued.
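If it does end up custom, the core of such a proxy is small. A rough sketch in Python (the port, backend addresses and the up/down flag are all assumptions; a real version would need to persist the queues to disk and drive the health checks from real monitoring):

    import socket, collections

    LISTEN = ("0.0.0.0", 5000)                           # assumed feed port
    BACKENDS = [("10.0.0.1", 5000), ("10.0.1.1", 5000)]  # prod and backup, assumed

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LISTEN)
    out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    queues = {b: collections.deque() for b in BACKENDS}  # per-server spool
    up = {b: True for b in BACKENDS}                     # flip from a health check

    while True:
        data, _ = sock.recvfrom(65535)
        for b in BACKENDS:
            if up[b]:
                while queues[b]:                         # replay spooled datagrams, oldest first
                    out.sendto(queues[b].popleft(), b)
                out.sendto(data, b)
            else:
                queues[b].append(data)                   # spool while that server is down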
 

Yeah, I am coming to the same conclusion but it is good to hear it from others as well.

Thanks all.

RB
 
Depends how reliable it must be... the easiest solution would be to use any number of Linux tools (I'm 99% sure you can do it with iptables) to duplicate an incoming data stream to multiple endpoints, then protect that with Heartbeat / your choice of Linux HA.

You will lose a bit of data during failover though, maybe only a fraction of a second if you configure it well, but I can't see it being possible with no loss.

The far more sensible way is to get multiple feeds configured at the source end, which will be far easier and more reliable...
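For the duplication half, the usual mechanism is the TEE target in iptables (from xtables-addons, in mainline on newer kernels), which clones matching packets to a second gateway; the port and address below are made-up examples:

    # clone incoming feed datagrams to the second box (must be reachable as a gateway)
    iptables -t mangle -A PREROUTING -p udp --dport 5000 -j TEE --gateway 10.0.1.1

The original datagram still goes to the local server; only the clone is sent to the gateway, so this covers duplication but not queueing.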
 
There could be an F5/BIG-IP product/device to do it, depends on budget restrictions.

Budget, as with most of these things for small businesses, is unspecified :(. They came to me wanting a new production server, with a mind to putting their current (only) server into use as a spare. In the end they went for a 1K (GBP) 1U unit (E3-1230, 16GB ECC RAM, M1015 SAS card, Supermicro server board, case and PSU). Two separate datacentres are to be used for the prod and backup.

They then needed help configuring it, and stated a preference for the data feed to go to both servers with no break in the data and no downtime of service during recovery. It seems they need historical tracking data as part of their service offering.

On discussing the set-up, a number of holes were seen, so a couple of questions were put forward for them to think about (testing for patching / new software releases etc., a second server mirroring the fundamental hardware of the first, and what-if scenarios for them to try to resolve with the current infrastructure and the proposed new server).

The current proposal is that they replace their backup machine with one mirroring their production server (1/2 the memory for now), have a test machine (same Supermicro X9SCM-F-O board as the prod and backup machines, with 4GB ECC RAM), and have three Atom D525 based machines to fill the roles of two receive->queue->duplicate->deliver servers and one Simple Failover server. It is coming in around 3K (GBP) excluding bespoke software and a Simple Failover licence. The Atom units are consumer grade though; jumping to server grade will bump the price up by around S$300, and the servers do not come with drives (the client is sourcing the ES SAS drives themselves).


Yeah, I had seen the TEE target in iptables, which looks good for the duplication, but it is the queueing and replay that would seem to be the sticking point. Without the queueing we are back to re-syncing the backend databases with zero downtime.

Currently looking at the following solution for the data feed;

[diagram: homerecovery.jpg]

[diagram: FQDDServer.jpg]


I could also add a heartbeat between Server 1/2 and the Simple Failover server, so they could report if that unit is down.
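That heartbeat can be as simple as each server firing a periodic UDP datagram that the failover box times out on. A trivial sketch of the sender (the address, port and interval are made up):

    import socket, time

    DEST = ("10.0.2.1", 6000)   # assumed address of the Simple Failover server

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        sock.sendto(b"alive server1", DEST)   # receiver marks us down if these stop
        time.sleep(1)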

To be honest, I really do not know if they will want to go for any of this, as during the last discussion one of the clients suggested maintaining the old server as a backup but not patching it at all, and just having a test server for testing patches to their production system only :confused:.

RB
 
You're all trying to design this with hardware when the solution is OOTB with a properly designed service-oriented architecture. There are open-source Enterprise Service Bus solutions that handle all the data distribution and data queuing for you.
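As a sketch of the idea with a message broker rather than a full ESB: the receiver publishes each datagram to a durable fanout exchange (RabbitMQ via the pika client here; the port and names are made up), and each server consumes from its own durable queue, so messages for a down server simply accumulate until it reconnects:

    import socket
    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.exchange_declare(exchange="feed", exchange_type="fanout", durable=True)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5000))              # assumed feed port

    while True:
        data, _ = sock.recvfrom(65535)
        ch.basic_publish(exchange="feed", routing_key="", body=data,
                         properties=pika.BasicProperties(delivery_mode=2))  # persist to disk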
 

Great, I will have a search around, although some recommendations would be most helpful.

RB
 