Dual NIC routing in CentOS / Red Hat

Hi,

I have two servers, each with dual NICs. One NIC is for just about everything, and the second NIC on each machine is just for the two machines to talk to each other.

I am trying to find a decent guide on how to set up the routing for this. The second server will be on a separate network (different datacenters ultimately), but at this point I am building the software and config stack on two virtual machines (ESXi).

Each server's two NICs will use the same gateway, but NIC1 will handle most (default) traffic and the second will route to the other machine's IP address.

Say Server 1's subnet is 10.0.0.0/24 and Server 2's subnet is 10.0.1.0/24.

For ESXi and my home network, both of these gateways will be virtual machines (CentOS) connected to a virtual router (CentOS), which connects to my internal LAN (192.168.1.0/24) for internet access (package downloads and patches).

S1-Nic1(10.0.0.1)->GW1(10.0.0.254)
S1-Nic2(10.0.0.2)->GW1(10.0.0.254)

S2-Nic1(10.0.1.1)->GW2(10.0.1.254)
S2-Nic2(10.0.1.2)->GW2(10.0.1.254)

GW1(10.0.0.254)->RT1(192.168.1.253)
GW2(10.0.1.254)->RT1(192.168.1.253)

RT1(192.168.1.253)->ISP Router(192.168.1.254) -> WAN (Internet)

I'm not sure of the best tool to use to set up the routes (iproute2 seems to be the one), nor which rules to add.
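To show where my head is at, I was imagining something along these lines on Server 1, with eth1 as NIC2 (addresses taken from the diagram above, so treat this as a guess rather than a working config):

Code:
  # host route: reach Server 2's NIC2 (10.0.1.2) via my gateway, out the second NIC
  ip route add 10.0.1.2/32 via 10.0.0.254 dev eth1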

Direction, tutorials, etc. would be great, as most of what I have found only covers redundancy (failover) or teaming, not multipath routing.

If anyone is wondering, the second NICs are for MySQL replication traffic only, and that is a build requirement.
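For what it's worth, steering replication over NIC2 is mostly a matter of pointing each server's replica at the peer's NIC2 address, roughly like this (credentials and coordinates are placeholders):

Code:
  # on Server 2, point replication at Server 1's NIC2 address (10.0.0.2)
  mysql -e "CHANGE MASTER TO MASTER_HOST='10.0.0.2', MASTER_USER='repl', MASTER_PASSWORD='...'; START SLAVE;"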

Thanks
RB
 
As with this type of thing, I don't have the full picture of the why, but I can't see a reason to have two NICs on the same subnet with different IPs. Why not just bond them and keep the routing and ARP tables clean?

Other than that, if you want the traffic separation, why not logically separate the interfaces with different subnets/VLANs and route it properly, either with each server as a next hop or through your core router/gateway?

But you can use iptables to mark traffic and ip rules to steer it out via different routing tables / interfaces:

http://tldp.org/HOWTO/Adv-Routing-HOWTO/lartc.netfilter.html
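A rough sketch of that approach (table name, mark value, and port are my assumptions; eth1 taken as the second NIC and 10.0.0.254 as its gateway):

Code:
  # give the kernel a second routing table (number and name are arbitrary)
  echo "100 repl" >> /etc/iproute2/rt_tables

  # mark outgoing MySQL traffic (3306 assumed)
  iptables -t mangle -A OUTPUT -p tcp --dport 3306 -j MARK --set-mark 1

  # send marked packets to the 'repl' table, which routes out the second NIC
  ip rule add fwmark 1 table repl
  ip route add default via 10.0.0.254 dev eth1 table repl

  # note: rp_filter on eth1 may need relaxing for the return traffic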
 

Actually, it could be that the two data centers will assign external IP addresses to the servers' NICs. I will have to confirm.

The problem with VLANs / bonding is that I will not have access to the switches or routers in the data centers, so I am trying to configure a solution with that in mind. The people who will be taking the servers and installing them have very little knowledge of Linux or data centers and just want plug and play. My services have not been contracted for installation, just provision of the machines and the software/config to their specs.

Regards
RB
 
EDIT: I think I understand what you're getting at now.... I thought you meant this:

Code:
  [S1].1----10.0.1.0/24----.2[S2]
.1 |                          | .2
   | 10.0.0.0/24              |
  _|__________________________|_
                 |
               (GW) .254

But the servers are in different locations. :)

Having two NICs in the same subnet doesn't really make much sense unless you team them. Why not put the second NICs in new subnets and add a static route on each server to point the traffic out via that NIC? This will require the network team to configure new VLANs on the switches and interfaces on the routers (or both on the MLSs, if that's what's running in the datacentres).

EG:

[S1] NIC1 10.0.0.1 VLAN 100
[S1] NIC2 10.0.10.1 VLAN 110

[S2] NIC1 10.0.1.1 VLAN 101
[S2] NIC2 10.0.11.1 VLAN 111

[S1] Default gateway 10.0.0.254

[S2] Default gateway 10.0.1.254

Assuming the gateway for each new subnet is .254 (although a /24 is a bit of a waste for the dedicated second NICs):

[S1] Static: 10.0.11.0/24 via 10.0.10.254
[S2] Static: 10.0.10.0/24 via 10.0.11.254
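On CentOS those statics can be made persistent in the per-interface route files, something like this (assuming eth1 is the second NIC in both boxes):

Code:
  # /etc/sysconfig/network-scripts/route-eth1 on S1
  10.0.11.0/24 via 10.0.10.254 dev eth1

  # /etc/sysconfig/network-scripts/route-eth1 on S2
  10.0.10.0/24 via 10.0.11.254 dev eth1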


The proposal to use two NICs on the same subnet on each server just sounds like a headache, in all honesty. I'm sure you could do it with host routes specifying the interface, but again, it sounds like something people wouldn't want to support.

EDIT2: I might give this a go in my virtual lab at the weekend. Will let you know if I get a chance. :)
 
Are you not overcomplicating things?

First NIC on each server set up as normal for network traffic; second NIC on each server on a private subnet with a crossover cable. The MySQL instances are then configured to talk to each other over the private NICs. You don't need to route traffic on the same subnet.
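A minimal ifcfg for the private NIC would look something like this (device name and subnet are assumptions):

Code:
  # /etc/sysconfig/network-scripts/ifcfg-eth1 on server 1
  DEVICE=eth1
  BOOTPROTO=none
  IPADDR=192.168.100.1
  NETMASK=255.255.255.0
  ONBOOT=yes
  # no GATEWAY needed - both ends sit on the same /24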
 

The machines will be in different data centers (colo) and so will be on different subnets.

RB
 

Sorry, didn't see the notification of your reply.

On the assumption of a public IP being assigned to each NIC, I have currently set up routing with:

Server 1
Code:
  ip route add [NIC2 on server 2 IP] via [datacenter 1 gateway] dev eth1

Server 2
Code:
  ip route add [NIC2 on server 1 IP] via [datacenter 2 gateway] dev eth1

I am just sorting out some test data and a script to add more automatically. Then I will bring the second server (still in ESXi) up to the same software stack level as the first, test replication, and then pull both NIC2 cables (virtually) to see if replication continues to run and what happens to any other traffic. Likewise, I will test pulling just one of the servers' NIC2 connections and see what happens then.
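For the tests I will mainly be checking which path the kernel actually picks and watching the wire, roughly:

Code:
  # which route/interface does the kernel choose for the peer's NIC2 address?
  ip route get 10.0.1.2

  # watch replication traffic on the second NIC while pulling cables
  tcpdump -i eth1 -n port 3306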

RB
 