chris t: June 2008 Archives

traffic shaping: yet again.


Traffic shaping seems to have been vaguely successful.  We're trying EXTREMELY HARD to keep our 95th percentile below 10mbit, since it appears that going beyond that will cost us incredible added monies.

Note that I don't really want to limit to the ~2mbit outgoing that's being used now -- I'm not sure why the outgoing traffic is limited so hard, and I haven't got a test domain on that box to find out.  The goal is to put the dangerous customers into an "overage class" that gets a shared 10mbit, and let them fight over it -- but they should be getting 10mbit.  I'm theorizing that maybe their traffic relies on high incoming bandwidth -- but I don't really know.  I should be able to test once Luke wakes up.

The next thing I want to do in this area is generalize the script somewhat and make the domain creation scripts call it at boot.  I'm not sure what the done thing is here -- I'm guessing making it a vif parameter would be best, then having that pass through vif-bridge and call a vif-qos script (or similar.)

Okay, I guess that's what I'll do then.
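If I do go the vif-parameter route, the hook might end up looking something like this sketch. The script name, the way the parameters arrive, and the dry-run TC variable are all my assumptions -- Xen's actual vif-script interface may pass things differently:

```shell
#!/bin/sh
# Hypothetical vif-qos hook -- a sketch only.  vif-bridge would pass along
# the vif name and a rate pulled from a vif parameter in the domain config.
# TC defaults to a dry run that just prints the tc command it would execute.
TC=${TC:-"echo tc"}

shape_vif() {
    vif=$1
    rate=$2
    # tbf on the vif itself caps traffic headed into the domU
    $TC qdisc add dev "$vif" root tbf rate "$rate" latency 50ms burst 20kb
}

shape_vif baldr 1mbit
```

Run for real, you'd set TC=tc (and be root); as-is it just shows the command.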

traffic shaping: round 2

Started whacking domains with the b&width hammer, as per the directions noted a couple entries ago.

Further refinements will probably involve putting quotas directly in config files, with scripts to parse and automatically set limits at domain creation.  Ideally I'd also write a tool to re-assign domains to existing classes.
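As a sketch of the config-file idea, assuming a made-up "bandwidth" key that nothing in Xen reads -- only our own domain-creation script would:

```shell
#!/bin/sh
# Sketch: pull a per-domain quota out of a config file.  The "bandwidth"
# key is an invention; a real version would live in the creation scripts.
get_quota() {
    # print the value of a "bandwidth = ..." line, or a 1mbit default
    awk -F' *= *' '/^bandwidth/ { print $2; found = 1 }
                   END { if (!found) print "1mbit" }' "$1"
}

cfg=$(mktemp)
printf 'name = "baldr"\nbandwidth = 2mbit\n' > "$cfg"
get_quota "$cfg"    # prints 2mbit
rm -f "$cfg"
```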

This title is, of course, a complete lie.  It looks like our disk layout scheme broke LVM snapshots.  To quote from our testing:

# lvcreate -s -L 100M -d hydra_domU/test -n test_snap
  Snapshots and mirrors may not yet be mixed.

That's some real well-supported technology there.  Google gives me two results for that error message, both of which are source diffs.

I'm not sure what to do about this.  Every so often I feel like abandoning LVM mirroring entirely and moving to LVM on MD, but our earlier experiments with that didn't exactly fill us with joy either.

I'm also considering bypassing the LVM-specific snapshot implementation and using the device mapper directly, but that worries me.  I would want to know why snapshots and mirrors can't be mixed before implementing snapshots anyway.

Today I put my money where my mouth is and worked on traffic shaping.  I'm not 100% sure that this setup is correct -- we'll have to test it more before we put it in production.  Tentatively, though, here's how it works:

We're doing everything in the dom0.  Traffic shaping is, after all, a coercive technical solution.  Doing it in customer domUs would be silly.

First, we have to make sure that the packets on xenbr0 traverse iptables:

# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

This is so that we can mark packets according to which domU emitted them.  (There are other reasons, but that's the important one in terms of our traffic-shaping setup.)

Next, we limit incoming traffic.  This is the easy part.  To limit vif "baldr" to 1mbit/s, with bursts up to 2mbit and a maximum allowable latency of 50ms:

# tc qdisc add dev baldr root tbf rate 1mbit latency 50ms peakrate 2mbit mtu 1540 maxburst 40MB

This adds a queuing discipline, or qdisc, to the device "baldr".  Then we specify where to add it ("root") and what sort of qdisc it is ("tbf").  Finally we specify the rate, the maximum latency, the peak rate, and how much data can be sent at that peak rate.

Next we work on limiting outgoing traffic.  The policing filters might work, but they handle the problem by dropping packets, which is... bad.  Instead we're going to apply traffic shaping to the outgoing physical Ethernet device, peth0.

First, for each domU, we add a rule to mark packets from that network interface:

# iptables -t mangle -A FORWARD -m physdev --physdev-in baldr -j MARK --set-mark 5

Here the number 5 is an arbitrary integer.  Eventually we'll probably want to use the domain id, or something fancy.  We could also simply use tc filters directly that match on source IP address, but it feels more elegant to have everything keyed to the domain's "physical" network device.  Note that we're using physdev-in -- traffic that goes out from the domU comes in to the dom0.
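For several domUs, the marking step could be looped, something like this dry-run sketch (the vif names and the starting mark are invented; IPT defaults to just printing each rule):

```shell
#!/bin/sh
# Sketch: one distinct mark per domU vif.
IPT=${IPT:-"echo iptables"}

mark=5
for vif in baldr otherguest; do
    $IPT -t mangle -A FORWARD -m physdev --physdev-in "$vif" -j MARK --set-mark "$mark"
    mark=$((mark + 1))
done
```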

Next we create an HTB qdisc.  We're using HTB because it does what we want and has good documentation.  We won't go over the HTB options in detail, since we're just lifting examples from the HTB tutorial at this point:

# tc qdisc add dev peth0 root handle 1: htb default 12

Then we make some classes to put traffic into.  Each class will get traffic from one domU.  (As the HTB docs explain, we're also making a parent class so that they can share surplus bandwidth.)

# tc class add dev peth0 parent 1: classid 1:1 htb rate 100mbit
# tc class add dev peth0 parent 1:1 classid 1:2 htb rate 1mbit ceil 100mbit

Now that we have a class for our domU's traffic, we need a filter that'll assign packets to it.

# tc filter add dev peth0 protocol ip parent 1:0 prio 1 handle 5 fw flowid 1:2
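The class and filter steps could eventually be wrapped into one helper per domU.  A dry-run sketch, with the classid keyed to the iptables mark as an assumption of mine (and a ceil so a lone domU can borrow from the parent's 100mbit):

```shell
#!/bin/sh
# Sketch: give one domU its own HTB class and fw filter on peth0.
# TC defaults to a dry run that prints the tc commands.
TC=${TC:-"echo tc"}

add_domu_class() {
    mark=$1    # must match the --set-mark value in the mangle rule
    rate=$2
    $TC class add dev peth0 parent 1:1 classid "1:$mark" htb rate "$rate" ceil 100mbit
    $TC filter add dev peth0 protocol ip parent 1:0 prio 1 handle "$mark" fw flowid "1:$mark"
}

add_domu_class 5 1mbit
```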

At this point traffic to and from the target domU is essentially shaped.  To prove it, we copied a 100MB file out, followed by another in.   Outgoing transfer speed was 203.5KB/s, while incoming was about 115KB/s.

This incoming speed is as expected, but the outgoing rate is a bit high.  Still, though, it's about what we're looking for.  Tomorrow we'll test this with more machines and heavier loads.
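A quick sanity check on those numbers -- tc's "mbit" is decimal, 1,000,000 bits/s, which in the KB/s that file-copy tools report comes to:

```shell
#!/bin/sh
# 1mbit/s expressed in KB/s (1024-byte K, as copy tools usually report)
echo $(( 1000000 / 8 / 1024 ))   # 122 KB/s
```

So the ~115KB/s incoming figure sits right at the 1mbit cap, while 203.5KB/s outgoing is well above it.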

About this Archive

This page is an archive of recent entries written by chris t in June 2008.

