Re: [click] Max throughput supported by click router in netmap mode.
- Baldev Singh
- 2014-12-17 @ 12:34
I think this describes the unpredictable behaviour I am seeing.
When I run Click on the 10Gbit Ethernet ports of the server, I am able to
resolve the addresses and to connect to the clients.
But when I try to measure the bandwidth, it gets disconnected from the
server after some time.
There are no error messages, no clue at all as to what is happening.
I did try to find out what is going on, maybe it will help.
While Click was running, I used strace to check what it was actually
doing: it is continuously polling on the FDs of the opened netmap devices.
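For reference, this is roughly the trace I ran (attaching by PID; the
pidof lookup is just an example):

  # attach to the running click process and log the poll/select/ioctl calls it makes
  strace -f -e trace=poll,select,ioctl -p "$(pidof click)"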
The CPU speed is already at maximum, and I will check the other points;
I have put the exact commands I plan to try at the bottom of this mail.
One more thing: I think netmap is not supported in the current release,
Click 2.0.1.
On Wednesday 17 December 2014 05:40 PM, Luigi Rizzo wrote:
> On Wed, Dec 17, 2014 at 12:54 PM, Baldev Singh
> <firstname.lastname@example.org> wrote:
> Hello All,
> I am testing the throughput with 10Gbit card and click is showing
> some unpredictable behavior.
> So please help me to know whether click supports 10G or not?
> please elaborate "unpredictable behaviour".
> Especially on Linux (but also on FreeBSD, where applicable)
> there is typically some variation in the performance
> due to unfortunate interactions between system components including:
> - powerd (disable it and set cpu speed to max, otherwise the OS will
> often throttle down the clock because netmap is less CPU hungry than
> other systems);
> - the way NAPI tasks are scheduled (once again netmap is very lightweight
> in the interrupt handler, which is interpreted by the OS as a reason
> not to assign the task to a separate thread, stealing cycles from the
> current thread; try to pin the NAPI threads and force their use)
> - adaptive interrupt moderation (same as above: disable it.
> Recent ixgbe drivers try to reduce the interrupt moderation interval
> to very low values causing interrupt storms);
> - memory locality on multisocket machines (try to run on single socket,
> we do not have a way to bind memory to a specific numa socket and
> depending on where you end up your performance will be severely
> affected);
> - the number of tx/rx queues allocated on each card (typically match the
> number of cores, which on high-end machines is ridiculously high.
> Reduce the value with ethtool, it is very rare that you need more than 2-4
> at least for I/O).
> - flow control on the link (disable it, or a slow receiver will slow down
> the sender).
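Below are the commands I intend to try for the points above; the
interface name (eth4), IRQ numbers, masks and file names are only
examples from my setup, so please correct me if something looks wrong.

CPU clock (the speed is already at max here; for the record, one way to
force it on Linux, the sysfs path depends on the cpufreq driver):

  # force the performance governor on every core so the clock is never throttled
  for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
      echo performance > "$g"
  done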
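Pinning the NAPI/interrupt work of the 10G queues (IRQ 61 and the mask
are just examples; the real numbers come from /proc/interrupts):

  service irqbalance stop              # stop irqbalance so it does not undo the pinning
  grep eth4 /proc/interrupts           # find the IRQs of the rx/tx queues
  echo 2 > /proc/irq/61/smp_affinity   # CPU bitmask: pin this queue's IRQ to CPU1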
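Adaptive interrupt moderation (which coalescing parameters are accepted
depends on the driver; ethtool -c shows the current ones):

  ethtool -c eth4                               # show the current coalescing settings
  ethtool -C eth4 adaptive-rx off rx-usecs 100  # disable adaptive moderation, fix the interval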
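Memory locality, to keep everything on one socket (node 0 and
myconfig.click are placeholders):

  numactl --hardware                                        # show the NUMA topology
  numactl --cpunodebind=0 --membind=0 click myconfig.click  # run click bound to node 0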
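Number of tx/rx queues (whether the card exposes 'combined' or separate
rx/tx channels depends on the driver; ethtool -l shows it):

  ethtool -l eth4             # show how many queues are currently allocated
  ethtool -L eth4 combined 2  # reduce to 2 queue pairs as suggested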
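Flow control on the link:

  ethtool -a eth4                              # show the current pause frame settings
  ethtool -A eth4 autoneg off rx off tx off    # disable flow control in both directions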