[Jool-list] Jool performance help

Alberto Leiva ydahhrk at gmail.com
Tue Jul 17 13:21:44 CDT 2018


It does seem to be some sort of offload/MTU issue. If I increase the
interfaces' MTU, I get significantly better results:

$ ip netns exec IPv4 ip link set ipv4_to_jool mtu 65535
$ ip netns exec nsJool ip link set to_ipv4 mtu 65535
$ ip netns exec nsJool ip link set to_ipv6 mtu 65535
$ ip netns exec IPv6 ip link set ipv6_to_jool mtu 65535
$ iperf -B 172.17.1.2 -c 198.10.10.2 -i 1 -t 2
------------------------------------------------------------
Client connecting to 198.10.10.2, TCP port 5001
Binding to local address 172.17.1.2
TCP window size: 2.50 MByte (default)
------------------------------------------------------------
[  3] local 172.17.1.2 port 59082 connected with 198.10.10.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   679 MBytes  5.69 Gbits/sec
[  3]  1.0- 2.0 sec   662 MBytes  5.55 Gbits/sec
[  3]  0.0- 2.0 sec  1.31 GBytes  5.61 Gbits/sec

I'm still investigating.
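
In case anyone wants to poke at the same hypothesis, the relevant offloads
on each veth can be inspected and toggled with ethtool. A sketch, reusing
the interface names from above (the other three interfaces would get the
same treatment):

$ ip netns exec nsJool ethtool -k to_ipv6 | grep -E 'segmentation|receive-offload'
$ ip netns exec nsJool ethtool -K to_ipv6 gro off gso off tso off
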
On Tue, Jul 17, 2018 at 12:36 PM Alberto Leiva <ydahhrk at gmail.com> wrote:
>
> Muhammad:
>
> I might have successfully reproduced the problem. Can you please
> confirm if these are the kinds of numbers that you're seeing?
>
> ------------------------------------------------------------
> Client connecting to 198.10.10.2, TCP port 5001
> Binding to local address 172.17.1.2
> TCP window size: 85.0 KByte (default)
> ------------------------------------------------------------
> [  3] local 172.17.1.2 port 5001 connected with 198.10.10.2 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0- 1.0 sec   384 KBytes  3.15 Mbits/sec
> [  3]  1.0- 2.0 sec   256 KBytes  2.10 Mbits/sec
> [  3]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec
> [  3]  3.0- 4.0 sec   128 KBytes  1.05 Mbits/sec
> [  3]  4.0- 5.0 sec   256 KBytes  2.10 Mbits/sec
> [  3]  5.0- 6.0 sec   128 KBytes  1.05 Mbits/sec
> [  3]  6.0- 7.0 sec   256 KBytes  2.10 Mbits/sec
> [  3]  7.0- 8.0 sec   128 KBytes  1.05 Mbits/sec
> [  3]  8.0- 9.0 sec   256 KBytes  2.10 Mbits/sec
> [  3]  9.0-10.0 sec   128 KBytes  1.05 Mbits/sec
> [  3] 10.0-11.0 sec   256 KBytes  2.10 Mbits/sec
> [  3] 11.0-12.0 sec   128 KBytes  1.05 Mbits/sec
> [  3] 12.0-13.0 sec   256 KBytes  2.10 Mbits/sec
> [  3] 13.0-14.0 sec   128 KBytes  1.05 Mbits/sec
> [  3] 14.0-15.0 sec   256 KBytes  2.10 Mbits/sec
> [  3] 15.0-16.0 sec   128 KBytes  1.05 Mbits/sec
> [  3] 16.0-17.0 sec   128 KBytes  1.05 Mbits/sec
> [  3] 17.0-18.0 sec   256 KBytes  2.10 Mbits/sec
> [  3] 18.0-19.0 sec   128 KBytes  1.05 Mbits/sec
> [  3] 19.0-20.0 sec   256 KBytes  2.10 Mbits/sec
> [  3] 20.0-21.0 sec   256 KBytes  2.10 Mbits/sec
> [  3] 21.0-22.0 sec   128 KBytes  1.05 Mbits/sec
> [  3] 22.0-23.0 sec   256 KBytes  2.10 Mbits/sec
> [  3] 23.0-24.0 sec   128 KBytes  1.05 Mbits/sec
> [  3] 24.0-25.0 sec   128 KBytes  1.05 Mbits/sec
> [  3] 25.0-26.0 sec   256 KBytes  2.10 Mbits/sec
> [  3] 26.0-27.0 sec   256 KBytes  2.10 Mbits/sec
> [  3] 27.0-28.0 sec   384 KBytes  3.15 Mbits/sec
> [  3] 28.0-29.0 sec   384 KBytes  3.15 Mbits/sec
> [  3] 29.0-30.0 sec   256 KBytes  2.10 Mbits/sec
> [  3]  0.0-30.3 sec  6.38 MBytes  1.76 Mbits/sec
> On Mon, Jul 16, 2018 at 3:00 PM <ali at acreto.io> wrote:
> >
> > Hi Alberto and Tore,
> >
> > Thanks for your feedback.
> >
> > I've removed the modules using the following:
> >
> > #rmmod jool
> > #rmmod jool_siit
> >
> > Re-inserted only the jool_siit module using the following (in the global namespace):
> >
> > #modprobe jool_siit no_instance
> >
> > In the Jool namespace I did the following:
> >
> > ip netns exec nsJool  jool_siit --instance --add
> >
> > ip netns exec nsJool  jool_siit --eamt --add 172.17.1.2 2001:db8::c60a:a03
> > ip netns exec nsJool  jool_siit --eamt --add 198.10.10.2 2001:db8::c60a:a02
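> >
> > For completeness, the resulting EAMT could be double-checked with
> > something along these lines (assuming the same Jool 3.x userspace
> > syntax as above):
> >
> > ip netns exec nsJool  jool_siit --eamt --display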
> >
> > Note: Ping and connectivity between the IPv4 and IPv6 hosts work fine.
> >
> > Please find attached the "ethtool -k" output for the interfaces. All the veth pairs have similar ethtool configurations.
> >
> > Also find attached a traffic capture taken on the IPv6 interface of nsJool. I've observed a lot of TCP out-of-order segments and retransmissions.
> >
> > Can you take a look at the configuration and share your thoughts on why there would be TCP out-of-order segments and retransmissions?
> >
> > Appreciate your help
> >
> > Thanks
> > Muhammad Ali
> >
> > -----Original Message-----
> > From: Alberto Leiva <ydahhrk at gmail.com>
> > Sent: Friday, July 13, 2018 12:09 AM
> > To: tore at fud.no
> > Cc: ali at acreto.io; jool-list at nic.mx
> > Subject: Re: [Jool-list] Jool performance help
> >
> > Thanks, Tore!
> >
> > I would like to add the following:
> >
> > > $ modprobe jool
> > > $ modprobe jool_siit
> >
> > Are you sure that this is what you want?
> >
> > I'm not sure why you would want to insert both modules in the same namespace. One is a SIIT and the other one is a NAT64. Particularly if you're performance-testing, I'd normally expect you to test one *or* the other.
> > On Thu, Jul 12, 2018 at 2:09 AM Tore Anderson <tore at fud.no> wrote:
> > >
> > > * ali at acreto.io
> > >
> > > > I’ve installed Jool kernel modules and userspace application on Ubuntu. jool_siit is running in a network namespace and I’m using veth pairs for network I/O. Please find attached details of my test environment.
> > > >
> > > > However, while running the TCP throughput test, I was able to achieve only 6 Mbps of throughput. I've tested with *GRO both on and off* on all the relevant interfaces, with no performance improvement.
> > > >
> > > > We are evaluating Jool for carrier grade NAT64 in our network infrastructure.
> > > >
> > > > I was wondering if you could help me improve these performance results. Are there any tweaks or workarounds to overcome this performance limitation with Jool?
> > >
> > > Hi Muhammad,
> > >
> > > First off, you're definitely not hitting the performance limit of
> > > Jool - it easily scales to multiple Gb/s of throughput. There must be
> > > something else that is causing your issues.
> > >
> > > Even though you said you turned GRO off, my suspicion would be
> > > something with packet sizes. Are there other offload settings you can
> > > turn off? Are the MTU settings on all the interfaces okay?
> > >
> > > Also, check with tcpdump on all the relevant interfaces to see if the
> > > test traffic is causing lots of ICMP Frag Needed/Packet Too Big errors.
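> > >
> > > Something along these lines could do it (a rough sketch; the interface
> > > names are placeholders, and the IPv6 filter assumes no extension
> > > headers before the ICMPv6 header):
> > >
> > > ICMPv4 "Fragmentation Needed" (type 3, code 4):
> > > tcpdump -ni <v4-iface> 'icmp[icmptype] == icmp-unreach and icmp[icmpcode] == 4'
> > >
> > > ICMPv6 "Packet Too Big" (type 2):
> > > tcpdump -ni <v6-iface> 'icmp6 and ip6[40] == 2'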
> > >
> > > Tore
> > > _______________________________________________
> > > Jool-list mailing list
> > > Jool-list at nic.mx
> > > https://mail-lists.nic.mx/listas/listinfo/jool-list

