QoS in Linux


Terminology

Not all of the terms used in this article have a generally accepted Russian translation. For the main terms, the English version is given along with alternative Russian translations where they exist. Other terms with disputed translations are listed below. If you think some term could be translated better, please say so on the discussion page.

Actions applied to traffic:

shaping
(шейпинг) Transmission rate control. The traffic is smoothed out. Shaping is applied to outgoing traffic.
scheduling
(шедулинг) Control over the order in which packets are sent. By reordering the packets waiting in the transmit queue, the quality of service for interactive traffic can be improved without hurting bulk traffic, which is insensitive to delays. The terms reordering and prioritizing are also used. Applied to outgoing traffic.
policing
Traffic policing. In practice it amounts to dropping the traffic that exceeds a configured rate. The difference between policing and shaping is roughly the difference between cutting the butter off a sandwich and spreading it out thinly before pushing the sandwich through the narrow gap under a door. Applied to incoming traffic (see the ingress sketch after this list).
dropping
Dropping traffic that exceeds a configured rate. Can be done on ingress as well as on egress.
marking
Packet marking.
classification
Packet classification.
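To illustrate the difference, incoming traffic is usually policed with the ingress qdisc and a policing filter. A minimal sketch, assuming eth0 and a 1 Mbit/s limit (neither figure comes from this article):

# Attach the ingress qdisc and drop everything arriving faster than 1 Mbit/s
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 \
  match u32 0 0 \
  police rate 1mbit burst 100k drop flowid :1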

Mechanisms on which QoS is based:

qdiscs
Queueing disciplines.
classes
Classes of service.
filters
Filters. Used to classify traffic. Newly arrived traffic does not belong to any class; it is assigned to one with the help of filters. There are a number of different traffic filters, differing in capabilities and in how they work.


Translation of terms

Terms whose Russian translation is not yet generally accepted (or I am simply not aware of one):

  • leaf-class — класс-лист;

Queueing disciplines

Queueing disciplines can be classful or classless. A classless discipline has no internal classes, so nothing further can be attached inside it; it is attached either to the root of a device or to a class of a classful discipline. A classful discipline contains classes, and further disciplines can be attached to those classes.

Classless queueing disciplines:

fifo
First In, First Out — packets are sent in the order they arrive; the queue length is limited by packet count (pfifo) or by bytes (bfifo).
pfifo_fast
A three-band FIFO in which the bands are served strictly by priority (chosen from the TOS field of the packet); the default queueing discipline on most interfaces.
RED
Random Early Detection — as the average queue length grows, packets are dropped (or marked) with increasing probability, before the queue overflows.
SFQ
Stochastic Fairness Queueing — traffic is split into many hashed FIFO sub-queues that are served in round-robin fashion, so that no single flow can dominate.
TBF
Token Bucket Filter — packets are passed only at a configured rate, paid for with tokens from a bucket; short bursts up to the bucket size are allowed.

Classful queueing disciplines:

CBQ
Class Based Queueing — a classful scheduler that shares bandwidth between classes based on the estimated idle time of the link; it has many parameters that are difficult to tune.
HTB
Hierarchical Token Bucket — a classful scheduler in which every class is guaranteed its configured rate and may borrow unused bandwidth from its parent up to a ceiling.
PRIO
PRIO — a simple classful scheduler with a fixed number of priority bands; a lower-priority band is served only when all higher-priority bands are empty.

How it works

Traffic classes are organized into a tree: every class has at most one parent, and a class may have many children. Classes that have no parent are called root classes. Classes that have no children are called leaf classes.
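A minimal sketch of how this terminology maps onto tc handles (the interface, the rates and the choice of HTB are assumptions used only for illustration):

tc qdisc add dev eth0 root handle 1: htb                         # root qdisc
tc class add dev eth0 parent 1:  classid 1:1  htb rate 1mbit     # root class (its parent is the qdisc itself)
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 512kbit   # leaf class (no children)
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 512kbit   # another leaf class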

QoS in Linux

You will need a kernel with QoS support and (optionally) netfilter, as well as the userspace tools iproute2 and iptables.

Kernel configuration

Build the kernel, answering y (or m, where possible) to the following questions:

 TCP/IP networking (CONFIG_INET) [Y/n/?]y
  IP: advanced router (CONFIG_IP_ADVANCED_ROUTER) [N/y/?] y
    IP: policy routing (CONFIG_IP_MULTIPLE_TABLES) [N/y/?] (NEW) y
    IP: use TOS value as routing key (CONFIG_IP_ROUTE_TOS) [N/y/?] (NEW) y
    IP: large routing tables (CONFIG_IP_ROUTE_LARGE_TABLES) [N/y/?] (NEW) y

 QoS and/or fair queueing (CONFIG_NET_SCHED) [N/y/?] y
  CBQ packet scheduler (CONFIG_NET_SCH_CBQ) [N/y/m/?] (NEW) y
  HTB packet scheduler (CONFIG_NET_SCH_HTB) [N/y/m/?] (NEW) y
  The simplest PRIO pseudoscheduler (CONFIG_NET_SCH_PRIO) [N/y/m/?] (NEW) y
  RED queue (CONFIG_NET_SCH_RED) [N/y/m/?] (NEW) y
  SFQ queue (CONFIG_NET_SCH_SFQ) [N/y/m/?] (NEW) y
  TBF queue (CONFIG_NET_SCH_TBF) [N/y/m/?] (NEW) y
 QoS support (CONFIG_NET_QOS) [N/y/?] (NEW) y
   Rate estimator (CONFIG_NET_ESTIMATOR) [N/y/?] (NEW) y
   Packet classifier API (CONFIG_NET_CLS) [N/y/?] (NEW) y
   TC index classifier (CONFIG_NET_CLS_TCINDEX) [N/y/m/?] (NEW) y
   Routing table based classifier (CONFIG_NET_CLS_ROUTE4) [N/y/m/?] (NEW) y
   Firewall based classifier (CONFIG_NET_CLS_FW) [N/y/m/?] (NEW) y
   U32 classifier (CONFIG_NET_CLS_U32) [N/y/m/?] (NEW) y
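On a modern distribution kernel the same options can usually be checked without recompiling. A sketch, assuming one of the usual config file locations is present:

# Either of these normally works, depending on the distribution
grep -E 'CONFIG_NET_SCH|CONFIG_NET_CLS' /boot/config-$(uname -r)
zcat /proc/config.gz | grep -E 'CONFIG_NET_SCH|CONFIG_NET_CLS'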

The tc program

SYNTAX

tc qdisc [ add | change | replace | link ] dev DEVICE [ parent qdisc-id | root ] [ handle qdisc-id ] qdisc [ qdisc parameters ]
tc class [ add | change | replace ] dev DEVICE parent qdisc-id [ classid class-id ] qdisc [ qdisc parameters ]
tc filter [ add | change | replace ] dev DEVICE [ parent qdisc-id | root ] protocol protocol prio priority filtertype [ filtertype parameters ] flowid flow-id
tc [ -s | -d ] qdisc show [ dev DEVICE ]
tc [ -s | -d ] class show dev DEVICE
tc filter show dev DEVICE
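For example, the show forms are convenient while experimenting, and tc qdisc del (accepted by tc although not listed in the synopsis above) removes the whole configuration. eth0 here is an assumption:

# Inspect the current configuration with statistics (-s) and details (-d)
tc -s -d qdisc show dev eth0
tc -s class show dev eth0
tc filter show dev eth0
# Remove the whole tree by deleting the root qdisc
tc qdisc del dev eth0 root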

Scripts

Pulsar QoS HOWTO

In this HOWTO I will try to explain briefly how to manage traffic on GNU/Linux systems.


For enabling netfilter support and for the kernel compilation procedure, see the corresponding HOWTOs.


I am (and not only I) of the opinion that HTB is a better choice than CBQ for building a class hierarchy for bandwidth management: CBQ has a number of parameters that have to be found empirically for every particular case.

Consider an example in which one client gets a fixed 64 kbit/s, another client is guaranteed 4 Mbit/s with the right to use more, and everyone else is guaranteed 2 Mbit/s with the right to use more. First, create the classes with HTB:

$TC qdisc add dev $DEVB root handle 1: htb default 30
$TC class add dev $DEVB parent 1: classid 1:1 htb \
  rate 100mbit ceil 100mbit burst 15k
$TC class add dev $DEVB parent 1:1 classid 1:10 htb \
  rate 64kbit ceil 64kbit  burst 15k
$TC class add dev $DEVB parent 1:1 classid 1:20 htb \
  rate 4mbit ceil 100mbit burst 15k
$TC class add dev $DEVB parent 1:1 classid 1:30 htb \
  rate 2mbit ceil 100mbit burst 15k

A few words about the burst parameter. The hardware can send only one packet at a time and only at a fixed rate (100 Mbit/s in the case of Fast Ethernet). HTB emulates several flows by switching between classes, so the burst parameter sets the maximum amount of data of a given class that may be pushed through the hardware before switching to other classes. It logically follows that the burst of a child class must not be larger than that of its parent.

Next we attach queueing disciplines for the traffic:

$TC qdisc add dev $DEVB parent 1:10 red \
  min 1600 max 3210 burst 2 limit 32100 avpkt 1000
$TC qdisc add dev $DEVB parent 1:20  sfq perturb 10
$TC qdisc add dev $DEVB parent 1:30  sfq perturb 10


With this we have said that traffic in class 1:10 will be "trimmed" by the RED (Random Early Detection) algorithm, and in the remaining classes by SFQ. The RED parameters are computed as follows: min = desired queueing latency × bandwidth (converted to bytes); max > 2 × min; burst = (2 × min + max) / (3 × avpkt); limit = 10 × max; avpkt = 1000 for an MTU of 1500.
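A worked example of these formulas for the 64 kbit/s class above, assuming a target queueing latency of 200 ms (the latency figure is an assumption, chosen so that the numbers match the command used earlier):

# min   = 0.2 s * 64000 bit/s / 8        = 1600 bytes
# max   > 2 * min                        -> 3210 bytes
# burst = (2*1600 + 3210) / (3*1000)     ~= 2
# limit = 10 * 3210                      = 32100 bytes
$TC qdisc add dev $DEVB parent 1:10 red \
  min 1600 max 3210 burst 2 limit 32100 avpkt 1000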

A few words about the traffic-control algorithms. The TBF algorithm passes packets at a fixed configured rate and is a good choice if you simply need to limit the rate on an interface. Its parameters are: rate, the rate we want; latency, the maximum time a packet may spend in the queue; and burst, the size of the token bucket in bytes (naturally, the larger the rate, the larger burst has to be). Example: limiting the rate on eth0, to which a DSL modem is attached:

$TC qdisc add dev eth0 root tbf rate 128kbit latency 50ms burst 1500

The central concept of SFQ is the flow: traffic is split into a fairly large number of FIFO queues that are serviced in round-robin fashion, so that none of them can dominate. Its parameters are perturb, the interval (in seconds) at which the hashing is reconfigured, and quantum, the amount of data released from one flow at a time (the MTU by default; never set it lower than that). Example: a pseudo-fair distribution of the outgoing traffic of an interface:

$TC qdisc add dev eth0 root sfq perturb 10



Next the traffic has to be distributed among the classes with tc filter, for example like this:

$TC filter add dev $DEVB protocol ip parent 1:0 prio 1 u32 \
match ip dst 192.168.15.132 flowid 1:10

This sends all traffic whose destination is 192.168.15.132 to class 1:10. It is also possible to classify packets previously marked with iptables, as follows:

$iptables -A OUTPUT -t mangle  -d 192.168.15.129 -j MARK --set-mark 20
$tc filter add dev $DEVB protocol ip parent 1:0 prio 2 handle 20 fw classid 1:20

More complex rules can be used to mark the traffic, for example:

$iptables -A OUTPUT -t mangle -p tcp  -d 192.168.15.129 \
  --sport 80 -j MARK --set-mark 20

Be sure to check with $iptables -L -n -v -t mangle how the packets are actually being marked; it may not be what you expected.


To summarize:
1. First create the root qdisc and the class hierarchy: #tc qdisc add ... root, #tc class add ...
2. Attach the leaf queueing disciplines: #tc qdisc add ...
3. Distribute the traffic among the classes: #tc filter add ...
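Putting the three steps together, a minimal end-to-end sketch might look like this (the interface, rates and address are assumptions, not a recommended configuration):

#!/bin/sh
TC=/sbin/tc
DEV=eth0

# 1. Root qdisc and class hierarchy
$TC qdisc add dev $DEV root handle 1: htb default 30
$TC class add dev $DEV parent 1:  classid 1:1  htb rate 100mbit ceil 100mbit
$TC class add dev $DEV parent 1:1 classid 1:10 htb rate 1mbit   ceil 1mbit
$TC class add dev $DEV parent 1:1 classid 1:30 htb rate 2mbit   ceil 100mbit

# 2. Leaf queueing disciplines
$TC qdisc add dev $DEV parent 1:10 sfq perturb 10
$TC qdisc add dev $DEV parent 1:30 sfq perturb 10

# 3. Classification
$TC filter add dev $DEV protocol ip parent 1:0 prio 1 u32 \
  match ip dst 192.168.15.132 flowid 1:10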


(c) Alexey Sheshka, 2002, sheshka@yahoo.com. Reproduction in printed publications without the author's consent is prohibited.

HTB Linux queuing discipline manual - user guide

Martin Devera aka devik (devik@cdi.cz) Manual: devik and Don Cohen Last updated: 5.5.2002


   * 1. Introduction
   * 2. Link sharing
   * 3. Sharing hierarchy
   * 4. Rate ceiling
   * 5. Burst
   * 6. Prioritizing bandwidth share
   * 7. Understanding statistics
   * 8. Making, debugging and sending error reports 

1. Introduction

HTB is meant as a more understandable, intuitive and faster replacement for the CBQ qdisc in Linux. Both CBQ and HTB help you to control the use of the outbound bandwidth on a given link. Both allow you to use one physical link to simulate several slower links and to send different kinds of traffic on different simulated links. In both cases, you have to specify how to divide the physical link into simulated links and how to decide which simulated link to use for a given packet to be sent.

This document shows you how to use HTB. Most sections have examples, charts (with measured data) and discussion of particular problems.

This release of HTB should also be much more scalable. See the comparison at the HTB home page.

Please note: the tc tool (not only HTB) uses shorthand to denote units of rate: kbps means kilobytes per second and kbit means kilobits per second! This is the most frequently asked question about tc in Linux.
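For example, both of the commands below are syntactically valid, but they set rates that differ by a factor of eight (the device and class ids are assumptions):

tc class add dev eth0 parent 1:1 classid 1:40 htb rate 100kbit   # 100 kilobits per second
tc class add dev eth0 parent 1:1 classid 1:41 htb rate 100kbps   # 100 kilobytes per second = 800 kilobits per second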

2. Link sharing

Problem: We have two customers, A and B, both connected to the internet via eth0. We want to allocate 60 kbps to B and 40 kbps to A. Next we want to subdivide A's bandwidth into 30kbps for WWW and 10kbps for everything else. Any unused bandwidth can be used by any class which needs it (in proportion to its allocated share).

HTB ensures that the amount of service provided to each class is at least the minimum of the amount it requests and the amount assigned to it. When a class requests less than the amount assigned, the remaining (excess) bandwidth is distributed to other classes which request service.

Also see the document about HTB internals; it describes the goal above in greater detail.

Note: In the literature this is called "borrowing" the excess bandwidth. We use that term below to conform with the literature. We mention, however, that this seems like a bad term since there is no obligation to repay the resource that was "borrowed".

The different kinds of traffic above are represented by classes in HTB. The simplest approach is shown in the picture at the right. Let's see what commands to use:

tc qdisc add dev eth0 root handle 1: htb default 12

This command attaches queue discipline HTB to eth0 and gives it the "handle" 1:. This is just a name or identifier with which to refer to it below. The default 12 means that any traffic that is not otherwise classified will be assigned to class 1:12.

Note: In general (not just for HTB but for all qdiscs and classes in tc), handles are written x:y where x is an integer identifying a qdisc and y is an integer identifying a class belonging to that qdisc. The handle for a qdisc must have zero for its y value and the handle for a class must have a non-zero value for its y value. The "1:" above is treated as "1:0".

tc class add dev eth0 parent 1: classid 1:1 htb rate 100kbps ceil 100kbps 
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 30kbps ceil 100kbps
tc class add dev eth0 parent 1:1 classid 1:11 htb rate 10kbps ceil 100kbps
tc class add dev eth0 parent 1:1 classid 1:12 htb rate 60kbps ceil 100kbps

The first line creates a "root" class, 1:1 under the qdisc 1:. The definition of a root class is one with the htb qdisc as its parent. A root class, like other classes under an htb qdisc, allows its children to borrow from each other, but one root class cannot borrow from another. We could have created the other three classes directly under the htb qdisc, but then the excess bandwidth from one would not be available to the others. In this case we do want to allow borrowing, so we have to create an extra class to serve as the root and put the classes that will carry the real data under that. These are defined by the next three lines. The ceil parameter is described below.

Note: Sometimes people ask me why they have to repeat dev eth0 when they have already used handle or parent. The reason is that handles are local to an interface, e.g., eth0 and eth1 could each have classes with handle 1:1.

We also have to describe which packets belong in which class. This is really not related to the HTB qdisc. See the tc filter documentation for details. The commands will look something like this:

tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 \
   match ip src 1.2.3.4 match ip dport 80 0xffff flowid 1:10
tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 \
   match ip src 1.2.3.4 flowid 1:11

(We identify A by its IP address which we imagine here to be 1.2.3.4.)

Note: The U32 classifier has an undocumented design bug which causes duplicate entries to be listed by "tc filter show" when you use U32 classifiers with different prio values.

You may notice that we didn't create a filter for the 1:12 class. It might be more clear to do so, but this illustrates the use of the default. Any packet not classified by the two rules above (any packet not from source address 1.2.3.4) will be put in class 1:12.

Now we can optionally attach queuing disciplines to the leaf classes. If none is specified the default is pfifo.

tc qdisc add dev eth0 parent 1:10 handle 20: pfifo limit 5
tc qdisc add dev eth0 parent 1:11 handle 30: pfifo limit 5
tc qdisc add dev eth0 parent 1:12 handle 40: sfq perturb 10

That's all the commands we need. Let's see what happens if we send packets of each class at 90kbps and then stop sending packets of one class at a time. Along the bottom of the graph are annotations like "0:90k". The horizontal position at the center of the label (in this case near the 9, also marked with a red "1") indicates the time at which the rate of some traffic class changes. Before the colon is an identifier for the class (0 for class 1:10, 1 for class 1:11, 2 for class 1:12) and after the colon is the new rate starting at the time where the annotation appears. For example, the rate of class 0 is changed to 90k at time 0, 0 (= 0k) at time 3, and back to 90k at time 6.

Initially all classes generate 90 kbps. Since this is higher than any of the rates specified, each class is limited to its specified rate. At time 3 when we stop sending class 0 packets, the rate allocated to class 0 is reallocated to the other two classes in proportion to their allocations, 1 part class 1 to 6 parts class 2. (The increase in class 1 is hard to see because it's only 4 kbps.) Similarly at time 9 when class 1 traffic stops its bandwidth is reallocated to the other two (and the increase in class 0 is similarly hard to see.) At time 15 it's easier to see that the allocation to class 2 is divided 3 parts for class 0 to 1 part for class 1. At time 18 both class 1 and class 2 stop so class 0 gets all 90 kbps it requests.

This might be a good time to touch on the concept of quantums. When several classes want to borrow bandwidth, each of them is given some number of bytes before the other competing classes are served. This number is called the quantum. You can see that if several classes are competing for the parent's bandwidth, they get it in proportion to their quantums. For precise operation, quantums need to be as small as possible but larger than the MTU. Normally you don't need to specify quantums manually, as HTB chooses precomputed values: when you add or change a class, its quantum is computed as its rate divided by the global r2q parameter. The default value of r2q is 10, and because a typical MTU is 1500 the default is good for rates from 15 kBps (120 kbit). For smaller minimal rates, specify r2q 1 when creating the qdisc; that works from 12 kbit, which should be enough. If you need to, you can specify the quantum manually when adding or changing a class; this also avoids warnings in the log when the precomputed value would be bad. When you specify the quantum on the command line, r2q is ignored for that class.
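A small sketch of both ways to influence the quantum (the device and figures are assumptions): with r2q 10, a class with rate 30kbps (30000 bytes/s) gets a quantum of 3000 bytes; specifying quantum on the class overrides r2q for that class.

# Lower r2q so that the precomputed quantums of small-rate classes stay above the MTU
tc qdisc add dev eth0 root handle 1: htb default 12 r2q 1
# ...or set the quantum of one class explicitly; r2q is then ignored for it
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 30kbps quantum 1500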

This might seem like a good solution if A and B were not different customers. However, if A is paying for 40kbps then he would probably prefer his unused WWW bandwidth to go to his own other service rather than to B. This requirement is represented in HTB by the class hierarchy.

3. Sharing hierarchy

The problem from the previous chapter is solved by the class hierarchy in this picture. Customer A is now explicitly represented by its own class. Recall from above that the amount of service provided to each class is at least the minimum of the amount it requests and the amount assigned to it. This applies to htb classes that are not parents of other htb classes. We call these leaf classes. For htb classes that are parents of other htb classes, which we call interior classes, the rule is that the amount of service is at least the minimum of the amount assigned to it and the sum of the amounts requested by its children. In this case we assign 40kbps to customer A. That means that if A requests less than the allocated rate for WWW, the excess will be used for A's other traffic (if there is demand for it), at least until the sum is 40kbps.

Notes: Packet classification rules can assign traffic to inner nodes too; in that case you have to attach another filter list to the inner node. In the end a packet should reach a leaf class or the special 1:0 class. The rate supplied for a parent should be the sum of the rates of its children.

The commands are now as follows:

tc class add dev eth0 parent 1: classid 1:1 htb rate 100kbps ceil 100kbps
tc class add dev eth0 parent 1:1 classid 1:2 htb rate 40kbps ceil 100kbps
tc class add dev eth0 parent 1:2 classid 1:10 htb rate 30kbps ceil 100kbps
tc class add dev eth0 parent 1:2 classid 1:11 htb rate 10kbps ceil 100kbps
tc class add dev eth0 parent 1:1 classid 1:12 htb rate 60kbps ceil 100kbps

We now turn to the graph showing the results of the hierarchical solution. When A's WWW traffic stops, its assigned bandwidth is reallocated to A's other traffic so that A's total bandwidth is still the assigned 40kbps. If A were to request less than 40kbps in total then the excess would be given to B.

4. Rate ceiling

The ceil argument specifies the maximum bandwidth that a class can use. This limits how much bandwidth that class can borrow. The default ceil is the same as the rate. (That's why we had to specify it in the examples above to show borrowing.) We now change the ceil 100kbps for classes 1:2 (A) and 1:11 (A's other) from the previous chapter to ceil 60kbps and ceil 20kbps.
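The change just described could be applied with tc class change, for instance as follows (a sketch based on the description above, not a command taken from the original manual):

tc class change dev eth0 parent 1:1 classid 1:2  htb rate 40kbps ceil 60kbps
tc class change dev eth0 parent 1:2 classid 1:11 htb rate 10kbps ceil 20kbps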

The graph at right differs from the previous one at time 3 (when WWW traffic stops) because A/other is limited to 20kbps. Therefore customer A gets only 20kbps in total and the unused 20kbps is allocated to B. The second difference is at time 15 when B stops. Without the ceil, all of its bandwidth was given to A, but now A is only allowed to use 60kbps, so the remaining 40kbps goes unused.

This feature should be useful for ISPs because they probably want to limit the amount of service a given customer gets even when other customers are not requesting service. (ISPs probably want customers to pay more money for better service.) Note that root classes are not allowed to borrow, so there's really no point in specifying a ceil for them.

Notes: The ceil for a class should always be at least as high as the rate. Also, the ceil for a class should always be at least as high as the ceil of any of its children.

5. Burst

Networking hardware can only send one packet at a time and only at a hardware dependent rate. Link sharing software can only use this ability to approximate the effects of multiple links running at different (lower) speeds. Therefore the rate and ceil are not really instantaneous measures but averages over the time that it takes to send many packets. What really happens is that the traffic from one class is sent a few packets at a time at the maximum speed and then other classes are served for a while. The burst and cburst parameters control the amount of data that can be sent at the maximum (hardware) speed without trying to serve another class.

If cburst is small (ideally one packet size), it shapes bursts so that they do not exceed the ceil rate, in the same way as TBF's peakrate does.

When you set the burst of a parent class smaller than that of some child, you should expect the parent class to get stuck sometimes (because the child will drain more than the parent can handle). HTB remembers these negative bursts for up to one minute.

You may ask why bursts are wanted at all. They are a cheap and simple way to improve response times on a congested link. For example, WWW traffic is bursty: you ask for a page, get it in a burst, and then read it. During that idle period the burst will "charge" again.

Note: The burst and cburst of a class should always be at least as high as those of any of its children.

On the graph you can see the case from the previous chapter where I changed the burst of the red and yellow (agency A) class to 20 kb while cburst remained at its default (circa 2 kb). The green hill at time 13 is due to the burst setting on the SMTP class of agency A: it had been under its limit since time 9 and had accumulated 20 kb of burst. The hill reaches up to 20 kbps (limited by ceil, because cburst is near the packet size). The clever reader may wonder why there is no red and yellow hill at time 7: it is because yellow is already at its ceil, so it has no room for further bursts. There is at least one unwanted artifact, the magenta crater at time 4. It is there because I intentionally "forgot" to add burst to the root link (1:1) class; it remembered the hill from time 1, and when at time 4 the blue class wanted to borrow yellow's rate, it refused and compensated itself.

Limitation: when you operate with high rates on a computer with a low-resolution timer, you need to set some minimal burst and cburst for all classes. The timer resolution is 10 ms on i386 systems and 1 ms on Alphas. The minimal burst can be computed as max_rate * timer_resolution. So for 10 Mbit on a plain i386 you need a burst of about 12 kB.
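A quick check of that figure, and one way it might be applied (the rate and the class chosen here are assumptions):

# 10 Mbit/s = 1 250 000 bytes/s; with a 10 ms timer:
#   minimal burst ~ 1 250 000 * 0.010 s = 12 500 bytes, i.e. about 12 kB
tc class change dev eth0 parent 1:1 classid 1:2 htb rate 10mbit ceil 10mbit burst 12k cburst 12k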

If you set the burst too small, you will see a lower rate than the one you configured. The latest tc tool will compute and set the smallest possible burst when it is not specified.

6. Prioritizing bandwidth share

Prioritizing traffic has two sides. First it affects how th