Odd iproute2 gateway behavior with two gateways

After helping a frustrated yet patient Elvar, I thought I'd better post about the situation we came across in case anybody else finds themselves in the same spot.

Elvar started with all the correct elements to set up a functioning multi-gateway routing box: two connections from two different providers on eth1 and eth2, running on an Ubuntu box with eth0 as the internal interface. Both internet connections worked on their own.

But alas: whenever both eth1 and eth2 were active on the host, outgoing packets just would not go out. I don’t know whether incoming packets were being replied to, as we were unable to check that.

If just eth1 or eth2 was active, then everything traversed fine. But we wanted it to work with both connections.

After a LONG time of diagnosing, we noticed there were two default routes in the main routing table (IPs fudged, but you get the idea):

firewall# ip route show
default via dev eth2  metric 100
default via dev eth1  metric 100

My firewall often has two default routes listed in the main table (ppp0 and ppp1) until the cleanup script fixes it, without any negative side effect. I may just be lucky, though.

Upon removing the eth2 line from the main table, everything started working correctly: incoming, outgoing, forwarding, balancing.
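For anyone hitting the same thing, the duplicate route can be inspected and removed with iproute2 directly. A minimal sketch (interface names are from the post; the real addresses were fudged, so none appear here):

```shell
# Show all default routes currently in the main table.
ip route show table main | grep '^default'

# Remove the stray default route that points out eth2.
# Matching on the device is enough to disambiguate the two entries.
ip route del default dev eth2

# Verify that only the eth1 default remains.
ip route show table main | grep '^default'
```

Note that a DHCP client on eth2 will likely re-add its default route on lease renewal, so in practice the delete tends to end up in a dhclient hook or a cleanup script.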

I also noticed the output of ifconfig eth2 looked a bit screwed too, but there was not much we could do about that, as the address was assigned by DHCP.

eth2      Link encap:Ethernet  HWaddr 00:11:22:33:44:55
    inet addr:  Bcast:  Mask:

See it? No, not the MAC address: the broadcast address. A quick ipcalc gives a broadcast that doesn't match what DHCP set. But once we removed the eth2 default route line it all started working again, and we didn't get to dig in to see whether the broadcast actually affected anything.
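For the curious, the broadcast address is just the network address with all host bits set, which is what ipcalc computes from the address and netmask. A quick sketch in shell (the address and prefix here are made up, since the real ones were fudged):

```shell
# Compute the IPv4 broadcast address for ADDRESS PREFIXLEN.
# Example values below are hypothetical, not from Elvar's box.
broadcast() {
  local ip=$1 prefix=$2
  local IFS=. a b c d
  read -r a b c d <<< "$ip"
  local addr=$(( (a << 24) | (b << 16) | (c << 8) | d ))
  # Netmask: the top $prefix bits set.
  local mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
  # Broadcast: address with all host bits set.
  local bc=$(( addr | (~mask & 0xFFFFFFFF) ))
  echo "$(( bc >> 24 & 255 )).$(( bc >> 16 & 255 )).$(( bc >> 8 & 255 )).$(( bc & 255 ))"
}

broadcast 192.0.2.10 24   # → 192.0.2.255
```

If the Bcast: field in ifconfig doesn't match this calculation for the interface's address and mask, something upstream (here, the DHCP server) handed out an inconsistent value.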

So that’s all. Just remember there are a lot more things that can go wrong in a multi-gateway setup, including things outside of iptables.



  1. January 26, 2010 1:03 pm

    And did removing the eth1 route cause the eth2 route to start working (i.e. vice versa)?

    It’s possible the source IP address was wrong and packets were going out via the wrong interface, and thus being filtered at the ISP’s end. tcpdump can help you confirm this.

    What I mean is that if they are separate Internet connections, and you haven’t entered into any arrangement with the ISP, or the connections go to separate ISPs, the IP addresses may not be interchangeable.
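    The check the commenter suggests can be done with tcpdump: watch what actually leaves each uplink and see whether the source address belongs to that interface's network. A rough sketch (interface names are from the post; 203.0.113.0/24 is a made-up stand-in for eth2's real subnet):

```shell
# Print packets leaving eth2 whose source address is NOT in eth2's
# own (hypothetical) subnet -- any hits mean traffic is departing
# with the other provider's source IP and will likely be dropped
# upstream by the ISP's ingress filtering.
tcpdump -ni eth2 -c 20 'not src net 203.0.113.0/24'
```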

    I encountered a similar issue to what I’m describing to you just the other day, except with IPv6. What was happening was that my irssi client running on my router was attempting to connect to IRC with a source IP of ::1 (the IPv6 equivalent of 127.0.0.1), instead of the proper IP address on the connecting interface, which was supposed to be 2001:44b8:7df3:b970::14. I had to type “/connect -host 2001:44b8:7df3:b970::14 irc.ipv6.freenode.net” inside irssi to force it to connect with the right source address.

    • January 26, 2010 1:32 pm

      I’m pretty sure it would work if either eth1 or eth2 were in the list on their own, and even with both in the list as long as eth1 was first.

      We already had the SNAT rules set up, since as soon as the eth2 default was removed, all of the multi-gateway routing started working too, with the eth1 and eth2 gateways also in their own per-connection tables.
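      For reference, the "own connection tables" part of such a setup usually looks something like the following. This is a generic sketch, not Elvar's actual config; all addresses, gateways, and table names are hypothetical:

```shell
# Two extra routing tables, one per provider (declared in
# /etc/iproute2/rt_tables, e.g. "1 isp1" and "2 isp2").
ip route add default via 198.51.100.1 dev eth1 table isp1
ip route add default via 203.0.113.1  dev eth2 table isp2

# Replies sourced from each provider's address must leave
# via the same link they arrived on.
ip rule add from 198.51.100.10 table isp1
ip rule add from 203.0.113.10  table isp2

# SNAT so forwarded traffic carries the address of whichever
# interface it actually leaves through.
iptables -t nat -A POSTROUTING -o eth1 -j SNAT --to-source 198.51.100.10
iptables -t nat -A POSTROUTING -o eth2 -j SNAT --to-source 203.0.113.10

# A single balanced default in the main table -- the part that
# broke when a second plain default route was also present.
ip route replace default scope global \
    nexthop via 198.51.100.1 dev eth1 weight 1 \
    nexthop via 203.0.113.1  dev eth2 weight 1
```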

      Interesting point about IPv6 too. I’m using Internode too and have been following your progress with IPv6 and Internode. You haven’t tried multi-gateway connections on IPv6, have you?

    • January 26, 2010 2:02 pm

      Haven’t done anything with multiple gateways, because despite the ability of Internode to support concurrent PPPoE sessions (up to 4), when you fire up a second DHCPv6 instance, it kills Internode’s route to the first instance.

      I suppose I could use a Hexago tunnel to emulate it, but I couldn’t really be bothered at this stage. Native IPv6 FTW.