[Jool-list] Jool performance help

Alberto Leiva ydahhrk at gmail.com
Mon Jul 23 16:05:14 CDT 2018


> And after testing, the performance does improve by using the standard MTU of 1500.
> That is very cool; as my boss said, "This looks very sexy" 😊

Awesome! Thanks!

> Also, thanks for your feedback on the network. I've also made some
> changes; now, after Jool SIIT, the v6 source and destination IPs are on
> different subnets. Hope this looks better now.

Looks good to me!

(Well, the network 172.17.1.0/24 is mistyped as "172.17.10.0/24" in the
diagram, but I get the point.)
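
For anyone reproducing this thread's setup on veth pairs, the offloads under suspicion can be toggled per interface with ethtool. A minimal sketch, assuming the interface names from the topology discussed here (adjust them to your own setup); it only prints the commands as a dry run, so you can review them before running as root:

```shell
# Offload toggles (GRO/GSO/TSO) that commonly distort packet sizes on
# veth pairs. Interface names are assumptions taken from this thread.
# Dry run: prints the commands; pipe to `sh` (as root) to apply them.
for ifc in ipv4_to_jool to_ipv4 to_ipv6 ipv6_to_jool; do
  echo "ethtool -K $ifc gro off gso off tso off"
done
```

Interfaces living inside a namespace additionally need an `ip netns exec <ns>` prefix on each command.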

On Fri, Jul 20, 2018 at 7:52 AM <ali at acreto.io> wrote:

> Hi Alberto,
>
> Thanks for sharing the latest build for kernel module. I've compiled the
> new kernel module using the source at issues/267.
>
> And after testing, the performance does improve by using the standard MTU of 1500.
> That is very cool; as my boss said, "This looks very sexy" 😊
>
> Below are the numbers I got for the updated kernel module.
>
> Also, thanks for your feedback on the network. I've also made some changes;
> now, after Jool SIIT, the v6 source and destination IPs are on different
> subnets. Hope this looks better now. Please find attached the updated network
> topology for SIIT.
>
> Also find attached a rudimentary bash script to set up the test environment
> as described in the attached file.
>
> Any additional feedback/suggestions are welcome.
>
> ############## Throughput ##################
>
> ip netns exec nsIP46GW iperf -B 172.17.1.2 -c 192.168.10.2 -i 1 -t 30
> ------------------------------------------------------------
> Client connecting to 192.168.10.2, TCP port 5001
> Binding to local address 172.17.1.2
> TCP window size: 85.0 KByte (default)
> ------------------------------------------------------------
> [  3] local 172.17.1.2 port 36089 connected with 192.168.10.2 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0- 1.0 sec   843 MBytes  7.07 Gbits/sec
> [  3]  1.0- 2.0 sec   820 MBytes  6.88 Gbits/sec
> [  3]  2.0- 3.0 sec   777 MBytes  6.51 Gbits/sec
> [  3]  3.0- 4.0 sec   912 MBytes  7.65 Gbits/sec
> [  3]  4.0- 5.0 sec   917 MBytes  7.69 Gbits/sec
> [  3]  5.0- 6.0 sec   913 MBytes  7.66 Gbits/sec
> [  3]  6.0- 7.0 sec   836 MBytes  7.02 Gbits/sec
> [  3]  7.0- 8.0 sec   815 MBytes  6.83 Gbits/sec
> [  3]  8.0- 9.0 sec   816 MBytes  6.85 Gbits/sec
> [  3]  9.0-10.0 sec   892 MBytes  7.48 Gbits/sec
>
> Thank you very much for your support.
>
> Thanks
> Muhammad Ali
> -----Original Message-----
> From: Alberto Leiva <ydahhrk at gmail.com>
> Sent: Thursday, July 19, 2018 5:35 AM
> To: ali at acreto.io
> Cc: tore at fud.no; jool-list at nic.mx
> Subject: Re: [Jool-list] Jool performance help
>
> Hello again:
>
> Just in case you're not listening to the issue tracker (
> https://github.com/NICMx/Jool/issues/267), I found a likely formal fix to
> this problem, and uploaded the code to the issue267 branch.
> (https://github.com/NICMx/Jool/tree/issue267)
>
> Performance should increase dramatically regardless of interface MTU.
>
> Though the code is scheduled to undergo formal testing by NIC Mexico next
> week, I would be truly thankful for some additional user feedback. As soon
> as the code is tested, Jool version 3.5.8 will be released.
>
> Greetings,
> Alberto
> On Tue, Jul 17, 2018 at 4:48 PM Alberto Leiva <ydahhrk at gmail.com> wrote:
> >
> > > That is an interesting find about MTU size. I'm curious how MTU size
> > > can impact the performance.
> >
> > Whenever active, offloads tweak and make a mess out of packet sizes.
> > Then Jool sees incorrect data and the resulting translated packets
> > exceed the MTU. By increasing the MTU, we prevent these packets from
> > being dropped.
> >
> > Because offloading is kind of random, some packets manage to reach the
> > destination. Because TCP retries a lot, the result is extreme slowness
> > instead of utter DoS.
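
As a rough back-of-the-envelope note (my numbers, not from the thread) on why translated packets can outgrow the link even before offloads enter the picture: IPv4-to-IPv6 translation replaces a 20-byte IPv4 header with a 40-byte IPv6 header, so every packet grows by 20 bytes, and offload-merged super-packets grow well past any plausible MTU:

```shell
# IPv4 header: 20 bytes (no options); IPv6 header: 40 bytes.
# SIIT translation therefore adds 20 bytes to every packet.
ipv4_pkt=1500                     # full-sized packet on a standard-MTU link
ipv6_pkt=$((ipv4_pkt + 40 - 20))
echo "$ipv6_pkt"                  # 1520: no longer fits a 1500-byte MTU
```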
> >
> > > I also tried to use packet sizes less than 512 B, without any
> > > improvements.
> >
> > It's not effective because these small packets are being merged into
> > very big packets somewhere.
> >
> > > Looking forward to any workaround to this problem.
> >
> > Well, if you're only dealing with virtual interfaces, the solution for
> > now is to just increase the MTU. If you try the experiment on actual
> > hardware, then the problem is unlikely to exist in the first place.
> >
> > --------------------
> >
> > By the way:
> >
> > This is not really that important, but your network looks a little
> > strange to me. It's not really an "idiomatic" SIIT setup.
> >
> > Is this the intended behavior that you want?
> >
> > 1. IPv4 iperf writes packets [172.17.1.2 -> 198.10.10.2]
> > 2. According to the EAMT, Jool translates those packets into
> >    [2001:db8::c60a:a03 -> 2001:db8::c60a:a02]
> >
> > Right now the situation is a little strange because the packet appears
> > to be directed towards a node on the same network (2001:db8::/64).
> > This is more akin to the nature of NAT than to that of SIIT.
> >
> > It's not something that can't be made to work with a few strange
> > routing rules, but it does mean that your translator will only be able
> > to mask packets coming from ONE IPv4 node. Unless I'm missing some
> > really important detail.
> >
> > The idea of SIIT is simply to "rename" networks, not have someone else
> > impersonate them. See the image attached to this mail for a visual
> > depiction of what I understand to be "idiomatic" SIIT.
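
To illustrate the "rename networks" idea, an idiomatic EAMT maps whole prefixes rather than single hosts. A sketch with purely illustrative prefixes (not this thread's addresses); note that the IPv4 and IPv6 prefixes of each entry must leave the same number of host bits (a /24 and a /120 both leave 8):

```shell
# Illustrative prefixes only -- substitute your own networks.
# Each /24 (8 host bits) pairs with a /120 (8 host bits).
ip netns exec nsJool jool_siit --eamt --add 172.17.1.0/24   2001:db8:1::/120
ip netns exec nsJool jool_siit --eamt --add 198.51.100.0/24 2001:db8:2::/120
```

With entries like these, every host in each IPv4 network gets a stateless one-to-one IPv6 counterpart, instead of the translator masking a single node.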
> > On Tue, Jul 17, 2018 at 1:54 PM <ali at acreto.io> wrote:
> > >
> > > Hi Alberto,
> > >
> > > Yes, these are similar to the performance numbers I was getting in my
> > > environment, i.e. 3.15 Mbits/sec.
> > >
> > > That is an interesting find about MTU size. I'm curious how MTU size
> > > can impact the performance.
> > >
> > > I also tried to use packet sizes less than 512 B, without any
> > > improvements.
> > >
> > > Appreciate your help with reproducing the issue. Looking forward to
> > > any workaround to this problem.
> > >
> > > Thanks
> > > Muhammad Ali
> > >
> > > -----Original Message-----
> > > From: Alberto Leiva <ydahhrk at gmail.com>
> > > Sent: Tuesday, July 17, 2018 11:22 PM
> > > To: ali at acreto.io
> > > Cc: tore at fud.no; jool-list at nic.mx
> > > Subject: Re: [Jool-list] Jool performance help
> > >
> > > It does seem to be some sort of offload/MTU issue. If I increase the
> > > interfaces' MTU, I get significantly better results:
> > >
> > > $ ip netns exec IPv4 ip link set ipv4_to_jool mtu 65535
> > > $ ip netns exec nsJool ip link set to_ipv4 mtu 65535
> > > $ ip netns exec nsJool ip link set to_ipv6 mtu 65535
> > > $ ip netns exec IPv6 ip link set ipv6_to_jool mtu 65535
> > > $ iperf -B 172.17.1.2 -c 198.10.10.2 -i 1 -t 2
> > > ------------------------------------------------------------
> > > Client connecting to 198.10.10.2, TCP port 5001
> > > Binding to local address 172.17.1.2
> > > TCP window size: 2.50 MByte (default)
> > > ------------------------------------------------------------
> > > [  3] local 172.17.1.2 port 59082 connected with 198.10.10.2 port 5001
> > > [ ID] Interval       Transfer     Bandwidth
> > > [  3]  0.0- 1.0 sec   679 MBytes  5.69 Gbits/sec
> > > [  3]  1.0- 2.0 sec   662 MBytes  5.55 Gbits/sec
> > > [  3]  0.0- 2.0 sec  1.31 GBytes  5.61 Gbits/sec
> > >
> > > I'm still investigating.
> > > On Tue, Jul 17, 2018 at 12:36 PM Alberto Leiva <ydahhrk at gmail.com> wrote:
> > > >
> > > > Muhammad:
> > > >
> > > > I might have successfully reproduced the problem. Can you please
> > > > confirm if these are the kinds of numbers that you're seeing?
> > > >
> > > > ------------------------------------------------------------
> > > > Client connecting to 198.10.10.2, TCP port 5001
> > > > Binding to local address 172.17.1.2
> > > > TCP window size: 85.0 KByte (default)
> > > > ------------------------------------------------------------
> > > > [  3] local 172.17.1.2 port 5001 connected with 198.10.10.2 port 5001
> > > > [ ID] Interval       Transfer     Bandwidth
> > > > [  3]  0.0- 1.0 sec   384 KBytes  3.15 Mbits/sec
> > > > [  3]  1.0- 2.0 sec   256 KBytes  2.10 Mbits/sec
> > > > [  3]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec
> > > > [  3]  3.0- 4.0 sec   128 KBytes  1.05 Mbits/sec
> > > > [  3]  4.0- 5.0 sec   256 KBytes  2.10 Mbits/sec
> > > > [  3]  5.0- 6.0 sec   128 KBytes  1.05 Mbits/sec
> > > > [  3]  6.0- 7.0 sec   256 KBytes  2.10 Mbits/sec
> > > > [  3]  7.0- 8.0 sec   128 KBytes  1.05 Mbits/sec
> > > > [  3]  8.0- 9.0 sec   256 KBytes  2.10 Mbits/sec
> > > > [  3]  9.0-10.0 sec   128 KBytes  1.05 Mbits/sec
> > > > [  3] 10.0-11.0 sec   256 KBytes  2.10 Mbits/sec
> > > > [  3] 11.0-12.0 sec   128 KBytes  1.05 Mbits/sec
> > > > [  3] 12.0-13.0 sec   256 KBytes  2.10 Mbits/sec
> > > > [  3] 13.0-14.0 sec   128 KBytes  1.05 Mbits/sec
> > > > [  3] 14.0-15.0 sec   256 KBytes  2.10 Mbits/sec
> > > > [  3] 15.0-16.0 sec   128 KBytes  1.05 Mbits/sec
> > > > [  3] 16.0-17.0 sec   128 KBytes  1.05 Mbits/sec
> > > > [  3] 17.0-18.0 sec   256 KBytes  2.10 Mbits/sec
> > > > [  3] 18.0-19.0 sec   128 KBytes  1.05 Mbits/sec
> > > > [  3] 19.0-20.0 sec   256 KBytes  2.10 Mbits/sec
> > > > [  3] 20.0-21.0 sec   256 KBytes  2.10 Mbits/sec
> > > > [  3] 21.0-22.0 sec   128 KBytes  1.05 Mbits/sec
> > > > [  3] 22.0-23.0 sec   256 KBytes  2.10 Mbits/sec
> > > > [  3] 23.0-24.0 sec   128 KBytes  1.05 Mbits/sec
> > > > [  3] 24.0-25.0 sec   128 KBytes  1.05 Mbits/sec
> > > > [  3] 25.0-26.0 sec   256 KBytes  2.10 Mbits/sec
> > > > [  3] 26.0-27.0 sec   256 KBytes  2.10 Mbits/sec
> > > > [  3] 27.0-28.0 sec   384 KBytes  3.15 Mbits/sec
> > > > [  3] 28.0-29.0 sec   384 KBytes  3.15 Mbits/sec
> > > > [  3] 29.0-30.0 sec   256 KBytes  2.10 Mbits/sec
> > > > [  3]  0.0-30.3 sec  6.38 MBytes  1.76 Mbits/sec
> > > > On Mon, Jul 16, 2018 at 3:00 PM <ali at acreto.io> wrote:
> > > > >
> > > > > Hi Alberto and Tore,
> > > > >
> > > > > Thanks for your feedback.
> > > > >
> > > > > I've removed the modules using the following:
> > > > >
> > > > > #rmmod jool
> > > > > #rmmod jool_siit
> > > > >
> > > > > Re-inserted only the jool_siit module using the following (in the
> > > > > global namespace):
> > > > >
> > > > > #modprobe jool_siit no_instance
> > > > >
> > > > > In the Jool namespace I did the following:
> > > > >
> > > > > ip netns exec nsJool  jool_siit --instance --add
> > > > >
> > > > > ip netns exec nsJool  jool_siit --eamt --add 172.17.1.2 2001:db8::c60a:a03
> > > > > ip netns exec nsJool  jool_siit --eamt --add 198.10.10.2 2001:db8::c60a:a02
> > > > >
> > > > > Note: Ping and connectivity between the IPv4 and IPv6 hosts work fine.
> > > > >
> > > > > Please find attached the "ethtool -k" output for the interfaces. All
> > > > > the veth pairs have similar ethtool configurations.
> > > > >
> > > > > Also find attached a traffic capture on the IPv6 interface of nsJool.
> > > > > I've observed a lot of TCP out-of-order packets and retransmissions.
> > > > >
> > > > > Can you take a look at the configurations and provide your
> > > > > feedback on why there could be TCP out-of-order packets and retransmissions?
> > > > >
> > > > > Appreciate your help
> > > > >
> > > > > Thanks
> > > > > Muhammad Ali
> > > > >
> > > > > -----Original Message-----
> > > > > From: Alberto Leiva <ydahhrk at gmail.com>
> > > > > Sent: Friday, July 13, 2018 12:09 AM
> > > > > To: tore at fud.no
> > > > > Cc: ali at acreto.io; jool-list at nic.mx
> > > > > Subject: Re: [Jool-list] Jool performance help
> > > > >
> > > > > Thanks, Tore!
> > > > >
> > > > > I would like to add the following:
> > > > >
> > > > > > $ modprobe jool
> > > > > > $ modprobe jool_siit
> > > > >
> > > > > Are you sure that this is what you want?
> > > > >
> > > > > I'm not sure why you would want to insert both modules in the same
> > > > > namespace. One is a SIIT and the other one is a NAT64. Particularly if
> > > > > you're performance-testing, I'd normally expect you to test one *or* the
> > > > > other.
> > > > > On Thu, Jul 12, 2018 at 2:09 AM Tore Anderson <tore at fud.no> wrote:
> > > > > >
> > > > > > * ali at acreto.io
> > > > > >
> > > > > > > I've installed the Jool kernel modules and userspace application on
> > > > > > > Ubuntu. jool_siit is running in a network namespace and I'm using veth
> > > > > > > pairs for network I/O. Please find attached details of my test environment.
> > > > > > >
> > > > > > > However, while running the TCP throughput test, I was able to
> > > > > > > achieve only 6 Mbps of throughput. I've tested it with GRO both on and
> > > > > > > off on all the relevant interfaces, with no performance improvement.
> > > > > > >
> > > > > > > We are evaluating Jool for carrier-grade NAT64 in our network
> > > > > > > infrastructure.
> > > > > > >
> > > > > > > I was wondering if you can help me improve the performance
> > > > > > > results. Are there any tweaks or workarounds to overcome the
> > > > > > > performance limitation?
> > > > > >
> > > > > > Hi Muhammad,
> > > > > >
> > > > > > First off, you're definitely not hitting the performance
> > > > > > limit of Jool - it easily scales to multiple Gb/s of
> > > > > > throughput. There must be something else that is causing your
> > > > > > issues.
> > > > > >
> > > > > > Even though you said you turned GRO off, my suspicion would be
> > > > > > something with packet sizes. Are there other offload settings
> > > > > > you can turn off? Are the MTU settings on all the interfaces
> > > > > > okay?
> > > > > >
> > > > > > Also, check with tcpdump on all the relevant interfaces to see
> > > > > > if the test traffic is causing lots of ICMP Frag Needed/Packet
> > > > > > Too Big errors.
> > > > > >
> > > > > > Tore
> > > > > > _______________________________________________
> > > > > > Jool-list mailing list
> > > > > > Jool-list at nic.mx
> > > > > > https://mail-lists.nic.mx/listas/listinfo/jool-list
> > >
> > >
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail-lists.nic.mx/pipermail/jool-list/attachments/20180723/feb40041/attachment-0001.html>

