Testing pfSense as an IPv6 Firewall - A Weird Case (Testing IPv6 Security Devices, Part 2)

pfSense is a fork of m0n0wall and, to the best of my knowledge, the oldest open-source IPv6 firewall that is still maintained by its developers. One would therefore expect its maturity to be good enough for everyday use.


The latest pfSense version currently available is 2.3.3, based on FreeBSD 10.3-RELEASE-p16.

pfSense offers the same IPv6 capabilities as OPNsense: configuring IPv6 on its interfaces, deploying a DHCPv6 server, sending and tuning Router Advertisements, and so on. So, from an IPv6 configuration perspective, the only difference between pfSense and OPNsense is the latter's capability of filtering IPv6 Extension headers, which, nevertheless, does not seem to really work.

Testing IPv6 Filtering Capabilities of pfSense – A Weird Behaviour

Despite its lack of IPv6 Extension header filtering, I tested pfSense as an IPv6 firewall. I will spare you most of the details and focus on a weird situation that I encountered.


First, I easily noticed (e.g. by introducing a deliberate delay between fragments of the same datagram) that pfSense performs deep packet inspection: it does not forward any fragment until it has received them all, reassembled them, and examined the original pre-fragmentation IPv6 datagram.


Then, during a perfectly legitimate fragmentation test case, I received an ICMPv6 Parameter Problem / Erroneous header field encountered message (see first figure), and I did not understand why. As the figure shows, the packets sent were two fragments of a TCP (SYN) header: the first was 8 bytes long and the second (and last) was 12 bytes long. This split is completely legitimate according to the procedure described in [RFC 2460].


The two fragments and the response received back are depicted in the first Wireshark output.

To identify the reason, I had to capture and examine the traffic as received at the end host (the final “target”). This is shown in the next Wireshark screenshot, which in my opinion explains a lot: after examining the reassembled IPv6 datagram, pfSense re-fragments it into two fragments, so as to forward them as they were received. However, presumably due to a bug, the first fragment is 12 bytes long and the second, inevitably, 8 bytes long. As [RFC 2460] clearly states, every fragment except the last must have a length that is a multiple of 8 bytes. Given that fragment offsets are also expressed in 8-byte units (an offset of 1 in a Fragment Extension header denotes an actual offset of 8 bytes, an offset of 2 denotes 16 bytes, and so on), this split inevitably creates fragment overlap.
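To see why such a split must overlap, consider the offset arithmetic (a minimal Python sketch of mine, not pfSense code; the function name is invented for illustration):

```python
# Fragment offsets are carried in 8-octet units, so a non-final fragment
# whose payload is not a multiple of 8 cannot be followed cleanly.

def next_offset_units(first_len_bytes):
    """Offset (in 8-octet units) that the second fragment has to carry."""
    return first_len_bytes // 8  # the Fragment header has no finer granularity

# The legitimate split sent through pfSense: 8 bytes + 12 bytes.
assert next_offset_units(8) == 1      # second fragment starts at byte 8: fine

# The (buggy) re-fragmentation: 12 bytes + 8 bytes.
units = next_offset_units(12)         # 12 // 8 == 1 -> offset of only 8 bytes
overlap = 12 - units * 8
print(overlap)  # → 4: the last 4 bytes of the first fragment are overlapped
```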


In such a case, if the final recipient implements [RFC 5722], it will simply drop all the fragments of such a datagram and the consequences will “only” be operational (i.e. legitimate traffic is dropped for no reason). However, if the final target accepts overlapping fragments, and especially if newer fragments override previously received ones, this can have security implications. Specifically, an attacker could carefully craft a fragmented datagram so as to avoid firewall inspection while still being accepted by the final target. This would lead to firewall evasion (in a new post, after this issue is patched, I may explain how this could be achieved).


In the meantime, the mitigation would be: a) to patch all end systems so that they implement [RFC 5722] (thankfully this is already the case for the majority of modern operating systems), and b) to drop fragmented packets at the perimeter. If you use pfSense, this can only be achieved from the command line, since no such option is available in the web interface (in a new post I may also explain how to do it).


A final thought: given that pfSense is a web interface on top of FreeBSD capabilities, the aforementioned bug could actually be a FreeBSD bug in version 10.3 p16. I have therefore reported the issue to both the FreeBSD and pfSense security teams.








OPNsense as an IPv6 Firewall (Testing IPv6 Security Devices, Part 1)

As the Cisco Labs measurements show, IPv6 is a protocol that cannot be ignored any more. In some countries, such as Belgium, Greece, Germany and the US, the percentage of users employing IPv6 ranges from about 30% up to 50%, and, based on the estimations, IPv6 traffic will continue to grow exponentially. So, it’s time to make sure that our firewall supports IPv6 as well.


While there are several open-source firewall solutions, both Linux-based and FreeBSD-based, the choice narrows considerably once IPv6 support is required. Since the m0n0wall project has officially ended, the only two options actually left for open-source users seeking an IPv6 firewall are OPNsense and pfSense (if someone has an additional suggestion, please let me know).


Whilst pfSense has supported IPv6 for quite a long time, as a firewall it has a significant disadvantage from a security perspective: as of version 2.3.3 Community Edition, it does not allow filtering IPv6 datagrams based on the IPv6 Extension headers they carry. If its administrator wants to filter, e.g., IPv6 traffic carrying a Hop-by-Hop header, a Destination Options header, etc. (see [RFC 2460] for more details on IPv6 Extension headers), he simply cannot do it. I do consider the capability of filtering IPv6 Extension headers really important, for the reasons demonstrated here and here, and in my opinion it should be configurable. Therefore, I decided to give OPNsense a try, since it seems to be the only open-source solution that currently offers IPv6 Extension header filtering capabilities.


IPv6 Features of OPNsense

At the time of this writing, the latest available version of OPNsense is 17.1.4, based on FreeBSD 11.0.

As far as IPv6 is concerned, OPNsense provides all the options typically required to configure the interfaces of a device for IPv6 usage (e.g. static, using DHCPv6, SLAAC, etc.).

The user also has the capability to configure the IPv6 addresses of the LAN hosts by various means, e.g. by sending Router Advertisements (RAs) or by operating a DHCPv6 server. Furthermore, the configuration options provided in each case seem to be more than enough. For instance, for RAs one can configure whether the Managed bit should be set, the router priority, the minimum and maximum intervals between RAs, etc. (see figure at the right).

However, given that I was mainly interested in the IPv6 filtering capabilities that OPNsense provides, let’s jump directly to them.









Testing IPv6 Filtering Capabilities of OPNsense

In its “Protocol” field, OPNsense provides options to filter on various IPv6 headers, including an IPv6 (encapsulated) header, ICMPv6, as well as some Extension headers such as the IPv6 Routing header, the Fragment Extension header and the IPv6 Options header (without clarifying whether this means the Destination Options header, the Hop-by-Hop header, or both), etc. (see figure at the right).

Of course, all the above configuration capabilities would be of little significance if they were not tested. For this reason, some basic testing was performed using my tool, Chiron. As the target IPv6 address, the LAN IPv6 address of the firewall itself was used (so as to check at the same time, where possible, both the filtering capabilities and the OPNsense/FreeBSD compliance regarding IPv6 Extension headers). To this end, a rule was added to allow HTTP traffic over IPv6 to the LAN IPv6 address.

After the above configuration, it was found out that by default the following traffic is allowed:

  • Unfragmented HTTP over IPv6 (as obviously was expected).

  • Hop-by-hop header, Destination Options header, or their combination (only in the correct order).

  • Up to 14(!) Destination Options headers in one unfragmented datagram (certainly this is not a compliant behaviour from FreeBSD).

  • Atomic fragments.

  • Fragmented HTTP over IPv6.

  • Fragmented packets of the aforementioned use cases (e.g. multiple Extension headers in fragmented datagrams).

On the contrary, by default no responses are received in the following use cases, probably not because of firewall filtering but because the host complies with the corresponding RFCs:

  • Routing header (all types, e.g. Type 0, Type 1, etc. - Type 0 has been deprecated with RFC 5095, the others potentially due to lack of support).

  • More than one Hop-by-Hop header in unfragmented packets (which is clearly forbidden by [RFC 2460]).






A few thoughts about IPv6 Jumbograms (and how you can send them over normal MTU links)

As defined in RFC 2675 [Borman, Deering, Hinden, 1999], “a "jumbogram" is an IPv6 packet containing a payload longer than 65,535 octets”. A regular IPv6 datagram, due to the 16-bit Payload Length field in the IPv6 main header, can carry at most 65,535 octets of IPv6 payload [Deering, Hinden, 1998]; this includes any IPv6 extension headers that may follow the IPv6 main header, but not the main header itself. For applications that need bigger datagrams than this (by the way, are you aware of any of this kind?), IPv6 jumbograms come into play. However, as also explained in RFC 2675, “jumbograms are relevant only to IPv6 nodes that may be attached to links with a link MTU greater than 65,575 octets” (65,535 octets of IPv6 payload plus 40 octets for the IPv6 main header itself).
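The two limits follow directly from the widths of the respective length fields; a trivial arithmetic check:

```python
# Field-width arithmetic behind the two limits above (illustrative only).
payload_length_bits = 16   # Payload Length field in the IPv6 main header
jumbo_length_bits = 32     # Jumbo Payload Length field in the Jumbo option

max_regular_payload = 2 ** payload_length_bits - 1
max_jumbo_payload = 2 ** jumbo_length_bits - 1

print(max_regular_payload)  # → 65535
print(max_jumbo_payload)    # → 4294967295
```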

How IPv6 Jumbograms are Constructed

IPv6 jumbograms are defined as an IPv6 hop-by-hop option, called the “Jumbo Payload” option, that carries a 32-bit length field in order to allow transmission of IPv6 packets with payloads between 65,536 and 4,294,967,295 octets in length [Borman, Deering, Hinden, 1999].


The Payload Length field in the IPv6 header must be set to zero, while the Next Header value is also set to 0 (implying that a Hop-by-Hop header follows). So, an IPv6 Jumbogram would look like:



Now, the Hop-by-Hop header itself will look like:

where “Next Header” is an 8-bit field whose value indicates the header that follows (e.g. 58 if the next header is ICMPv6);

0 is the value of the Hdr Ext Len field of the Hop-by-Hop header (the length of the Hop-by-Hop Options header in 8-octet units, not including the first 8 octets);

C2 (in hexadecimal), i.e. 194 (in decimal), is the Jumbo Payload option type; and

4 is the Option Data Length.


Finally, the Jumbo Payload Length is a 32-bit unsigned integer indicating the length of the IPv6 packet in octets, excluding the IPv6 main header but including the Hop-by-Hop Options header and any other extension headers present. The Jumbo Payload Length must be greater than 65,535 (otherwise there is no need to use a Jumbogram).
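The layout described above can be packed in a few lines of Python (an illustrative sketch of mine, not code from any IPv6 stack; `struct` packs the fields in network byte order):

```python
import struct

def jumbo_hop_by_hop(next_header, jumbo_len):
    """Build the 8-byte Hop-by-Hop Options header carrying a Jumbo
    Payload option, with the field values discussed above (RFC 2675)."""
    assert jumbo_len > 65535, "otherwise a Jumbogram is pointless"
    return struct.pack(
        "!BBBBI",
        next_header,  # e.g. 58 if ICMPv6 follows
        0,            # Hdr Ext Len: 0 -> the header is 8 octets in total
        0xC2,         # Option Type: Jumbo Payload (194 decimal)
        4,            # Option Data Length: 4 octets
        jumbo_len,    # 32-bit Jumbo Payload Length
    )

print(jumbo_hop_by_hop(58, 65536).hex())  # → 3a00c20400010000
```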


According to RFC 2675, the Jumbo Payload option must not be used in a packet that carries a Fragment header.

Relevant Work

The following vulnerabilities related to the implementation of IPv6 Jumbograms have been discovered.


CVE-2007-4567: The ipv6_hop_jumbo function in net/ipv6/exthdrs.c in the Linux kernel before 2.6.22 does not properly validate the hop-by-hop IPv6 extended header, which allows remote attackers to cause a denial of service (NULL pointer dereference and kernel panic) via a crafted IPv6 packet.


CVE-2008-0352: The Linux kernel 2.6.20 through allows remote attackers to cause a denial of service (panic) via a certain IPv6 packet, possibly involving the Jumbo Payload hop-by-hop option (jumbogram).


CVE-2010-0006: The ipv6_hop_jumbo function in net/ipv6/exthdrs.c in the Linux kernel before, when network namespaces are enabled, allows remote attackers to cause a denial of service (NULL pointer dereference) via an invalid IPv6 jumbogram, a related issue to CVE-2007-4567.

Testing IPv6 Jumbograms

Of course, the big problem with IPv6 Jumbograms (from a tester's / researcher's perspective) is that it is not that easy to find a link with an MTU big enough to support them. So, I was thinking about alternative approaches. One such approach could be to … fragment an IPv6 Jumbogram. I know: first, this does not make sense in real life (why fragment something whose very purpose is to avoid the need for fragmentation?), and secondly, it is “forbidden” by the corresponding RFC (but who cares about that?). So, I decided to give it a try.


Now, the first approach would be to put the Fragment Extension header after the Hop-by-Hop header (which carries the “Jumbo Payload” option). In this case, there is a problem with the Payload Length field of the IPv6 main header: typically it should be zero for IPv6 Jumbograms, but given that we use fragmentation, a correct Payload Length is required to determine the end of each fragment. Nevertheless, this scenario can be tested as follows:


For 65536 bytes of IPv6 datagram (excluding the IPv6 main header), we need:

- 8 bytes for the Hop-by-Hop header

- 8 bytes for the ICMPv6 Echo Request header

- 65536 - 8 - 8 = 65520 bytes of layer-4 payload, that is, 8,190 8-octet units.
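A trivial sanity check of the arithmetic above:

```python
# Sizes used in the test case above (all in bytes, illustrative only).
hbh_len = 8                      # Hop-by-Hop header carrying the Jumbo option
icmpv6_len = 8                   # ICMPv6 Echo Request header
total = 65536                    # datagram size, excluding the IPv6 main header

l4_payload = total - hbh_len - icmpv6_len
print(l4_payload)                # → 65520
print(l4_payload // 8)           # → 8190 eight-octet units
assert l4_payload == len("AABBCCDD") * 8190   # matches the Chiron -l4_data payload
```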


Let's fragment it into 56 fragments. Assuming that our target listens on the IPv6 address 2001:db8:1:1::2, we can use the following Chiron command:


./chiron_scanner.py vboxnet0 -sn -d 2001:db8:1:1::2 -l4_data `python -c 'print "AABBCCDD"*8190'` -luE 0'(options=Jumbo;jumboplen=65536)' -nf 56 -plength 0


It should be noted that the reassembled packet is properly recognised as an IPv6 jumbogram, and specifically as an ICMPv6 Echo Request of 65,528 bytes (65,536 bytes are reached after adding the 8-byte Hop-by-Hop header).


The other approach is to fragment the Hop-by-Hop header itself (that is, to put the Hop-by-Hop header in the fragmentable part of the initial IPv6 datagram). Of course, RFC 2460 forbids this (“The Hop-by-Hop Options header, when present, must immediately follow the IPv6 header”), but you never know. Let's try to find out what happens in the real world, using the following Chiron command:


./chiron_scanner.py vboxnet0 -sn -d 2001:db8:1:1::2 -lfE 0 -l4_data `python -c 'print "AABBCCDD" *1'` -nf 2


The above command adds a Hop-by-Hop header to the fragmentable part of the IPv6 datagram and then fragments it into two fragments (for an explanation of the switches, please check my previous blog post).


The tested operating systems were (once more):

  • Windows 10 Home, Version 1511, OS Build 10586.36, x64

  • Centos 7, kernel 3.10.0-327.3.1 x86_64

  • Fedora 23, kernel 4.2.8-300 x86_64

  • FreeBSD 10.2-RELEASE-p7, amd64

  • OpenBSD 5.8, GENERIC#1170 amd64

After executing it, I found out that most of the tested OSes respond with an 'ICMPv6 Parameter Problem, unrecognized Next Header type encountered' packet, while OpenBSD responds with an Echo Reply (OK, not a vulnerability on its own, but such an RFC non-compliance is still a bit disappointing for an OS which emphasises, among other things, standardization, correctness, and proactive security).


How can we reproduce it? Using the previous approach, but moving the Hop-by-Hop header to the fragmentable part of the IPv6 datagram. This time the Chiron command will look like:


./chiron_scanner.py vboxnet0 -sn -d 2001:db8:1:1::2 -l4_data `python -c 'print "AABBCCDD"*8190'` -luE 0'(options=Jumbo;jumboplen=65536)' -nf 56 -plength


Again, Wireshark recognises the reassembled IPv6 Jumbogram ICMPv6 Echo Request properly!


Happy Jumbo-testing :-)


PS: The Chiron version that will support IPv6 Jumbograms will be released during the IPv6 Security Summit of Troopers 16.


Borman D., Deering S. & Hinden R. (1999). IETF RFC 2675 “IPv6 Jumbograms”, August 1999.


Deering S. & Hinden R. (1998). IETF RFC 2460 “Internet Protocol, Version 6 (IPv6) Specification”, December 1998.






Unusual IPv6 Fragmentation and Operating Systems Responses

It has already been almost three years since I last checked and presented (at BlackHat Asia 2012 and Troopers 13) the behaviour of Operating Systems (OS) in case of non-compliant (according to RFC 2460) usage of IPv6 Extension headers. One of the cases that I examined (briefly) at that time was a kind of “nested” IPv6 fragmentation. In this blog post I will present my latest results on this topic and try to extend the previous work with some more potentially interesting cases.


Starting from the very basics: the fragmentation fields in IPv6 have been removed from the IP main header (where they used to live in IPv4); instead, a dedicated IPv6 Extension header, the so-called Fragment Extension header, is used for fragmentation purposes in IPv6. This header incorporates, among other fields, the Fragment Offset field, the Fragment Identification field and the “More Fragments” (M) bit. Normally, there should be only one Fragment Extension header per IPv6 fragment (why would a legitimate sender use more than one?), but RFC 2460 is not that strict about it: “Each extension header should occur at most once, except for the Destination Options header which should occur at most twice”. In any case, I am not aware of, and cannot think of, any legitimate case in which an OS would incorporate more than one IPv6 Fragment header in a single fragment.


Lab set-up


In my last experiments I tested the following Operating Systems:

  • Windows 10 Home, Version 1511, OS Build 10586.36, x64

  • Centos 7, kernel 3.10.0-327.3.1 x86_64

  • Fedora 23, kernel 4.2.8-300 x86_64

  • FreeBSD 10.2-RELEASE-p7, amd64

  • OpenBSD 5.8, GENERIC#1170 amd64

 The tool used for the testing was, what else, Chiron.


As a layer 4 protocol, ICMPv6 Echo Request was used (since it is the easiest way to trigger a response).


Let's start by defining a baseline, examining which OSes accept Atomic fragments. These are fully compliant cases (by the way, there is a current, ongoing effort to deprecate them, but that is another story). Atomic fragments are those whose Offset equals 0 and whose M bit also equals 0, denoting that this is the first and, at the same time, the last fragment. They are used in some very special cases (which, however, are out of the scope of our discussion). Such a use case can be reproduced with the following Chiron command:

./chiron_scanner.py vboxnet0 -d 2001:db8:1:1::1,2001:db8:1:1::2 -lfE 44 -sn



vboxnet0 is the network interface to use.


2001:db8:1:1::1 and 2001:db8:1:1::2 are (some of) our targets (obviously, using a comma-separated list, more targets can be tested with a single command; they are omitted here for brevity).


-lfE 44 adds an “Atomic” Fragment Extension header (44 is the next header value of a Fragment Extension header), and


-sn is used for sending ICMPv6 Echo Request as a layer 4 header (nmap notation).


As expected (since this is a fully legitimate case), all OS respond to such fragments.


In this blog post, for convenience, I use the term “Atomic Fragment Extension header” for an IPv6 Fragment Extension header used in Atomic fragments, that is, one with Offset=0 and M=0.
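For reference, the 8-byte Fragment Extension header itself can be sketched as follows (my own illustrative Python, not Chiron code; the field layout follows RFC 2460, section 4.5):

```python
import struct

def fragment_header(next_header, offset_units, m_flag, ident):
    """Pack an IPv6 Fragment Extension header: Next Header (8 bits),
    Reserved (8 bits), Fragment Offset (13 bits, in 8-octet units),
    2 reserved bits plus the M ("More Fragments") flag, and a 32-bit
    Identification field."""
    off_res_m = (offset_units << 3) | (m_flag & 1)
    return struct.pack("!BBHI", next_header, 0, off_res_m, ident)

# An "Atomic" Fragment Extension header: Offset = 0 and M = 0.
print(fragment_header(58, 0, 0, 0xDEADBEEF).hex())  # → 3a000000deadbeef
```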


Test Case 1: Two “Atomic” Fragment Extension headers in a single IPv6 datagram


First, let's see what happens when using two “Atomic” IPv6 Fragment Extension headers in a row in the same IPv6 datagram (figure 1).



Figure 1: Two “Atomic” Fragment Extension headers in a single IPv6 datagram (the length of the headers in the figure are not in scale)

The above case can be reproduced by using the following Chiron command:


 ./chiron_scanner.py vboxnet0 -d 2001:db8:1:1::1,2001:db8:1:1::2 -lfE 44,44 -sn


The results showed that all OSes except OpenBSD respond to them. While such a case could typically be considered RFC-compliant (according to the aforementioned RFC 2460 quote), there is definitely no real, legitimate use for it.


Test Case 2: Fragmented Atomic Fragments


In this case, a ...fragmented (in two fragments) Atomic fragment is sent. These fragments look like:



Figure 2a: A ...fragmented Atomic fragment (the length of headers in the figure are not in scale)



The reassembled IPv6 datagram will look like:



Figure 2b: A reassembled ...fragmented Atomic fragment (the length of headers in the figure are not in scale)


 This case can be reproduced with the following Chiron command:


 ./chiron_scanner.py vboxnet0 -d 2001:db8:1:1::1,2001:db8:1:1::2 -lfE 44 -sn -nf 2


In the above command, while we have already added an “Atomic” Fragment Extension header using the -lfE 44 switch, we further fragment it using the -nf 2 switch.


No need to say that such a case should not be accepted in the real world :-)


The results showed that Fedora, Centos and Windows 10 accept these fragmented Atomic fragments.


Test Case 3: “Atomic Fragmentation”


In this case, an Atomic Fragment Extension header is added in front of each fragment of an otherwise normally fragmented IPv6 datagram. Specifically, let's assume that we have the following initial IPv6 datagram:



Figure 3a: A simple IPv6 datagram that incorporates one IPv6 Extension header (the length of headers in the figure are not in scale)


In the above datagram, a Destination Options Extension header was added in order to push the layer-4 header (ICMPv6 Echo Request in our case) into the second fragment (when fragmented, of course). So, when fragmented, the above datagram will look like:



Figure 3b: A fragmented IPv6 datagram that incorporates one IPv6 Extension header (the length of headers in the figure are not in scale)


Now, let's add an Atomic Fragment Extension header just after the IPv6 main header. Our fragments now turn into the following:



Figure 3c: “Atomic Fragmentation” (the length of headers in the figure are not in scale)


 The above case can be reproduced using the following Chiron command:


 ./chiron_scanner.py vboxnet0 -d 2001:db8:1:1::1,2001:db8:1:1::2 -luE 44 -lfE 60 -sn -nf 2


In the above command:


-lfE 60 adds a Destination Options Extension header in the fragmentable part (60 is the next header value of a Destination Options header),


-luE 44 adds an “Atomic” Fragment Extension header in the unfragmentable part (44 is the next header value of a Fragment Extension header), and


-nf 2 splits it in two fragments.


Which OS responds to such fragments? Just Windows 10!


 Potential security implications?


The typical ones: remote OS fingerprinting, potential IDPS evasion, and fuzzing of the targeted OS.


Can we make it even more complicated? Possibly. With IPv6 Extension headers, the sky is the limit ;-)


In a next blog post I will discuss IPv6 Jumbograms and how they can be sent (in very specific cases) over normal (e.g. Ethernet) MTU links.






How To Block Incoming IPv6 Fragments in the Latest Red Hat Releases

IP fragmentation has been the source of many security issues since the IPv4 era, and the situation is no different in IPv6 (see for example here and here). Of course, IPv6 Extension Headers can make things much worse. Hence, it seems that we have quite a few good reasons to block fragmentation at our firewalls, either at our network perimeter or even at host firewalls.

While writing an IPv6 Hardening Guide for Linux Servers, I found that the usual ip6tables rules to block fragmentation do not work in the latest Red Hat Enterprise Linux releases and clones (e.g. CentOS). Specifically, I found that on Red Hat Enterprise Linux 6.6 (version 7 is also affected), applying ip6tables rules to block incoming IPv6 fragments does not block them, as it should, whether they are atomic or not. You can easily confirm this by:

a. Issuing the command: ip6tables -I INPUT 1 -i eth1 -m ipv6header --header frag --soft -j DROP

b. Sending ICMPv6 Echo Requests with size bigger than the MTU size, e.g.: ping6 -s 4000 <IPv6 address of the target machine>.

Note: This issue affects only the blocking of IPv6 Fragment header, not the rest of the IPv6 Extension headers.

Similar results (no blocking of IPv6 fragments) are obtained for the following commands:

ip6tables -I INPUT 1 -m frag --fragfirst -j DROP #(it should drop the first fragment)

ip6tables -I INPUT 1 -m frag --fraglast -j DROP #(it should drop the last fragment)

ip6tables -I INPUT 1 -m frag --fragmore -j DROP #(it should drop all but the last fragment)

I immediately reported it as a bug at Red Hat Bugzilla (Bug 1170144), assuming that it would be the kind of bug that is fixed easily and quickly. However, the developers who responded explained that this behaviour is actually a consequence of a deliberate change in fragment handling in these systems. Specifically, after bug id 1011214 (kernel-2.6.32-437.el6), netfilter started processing the reassembled packet instead of the individual fragments, just as it does for IPv4. For Red Hat 6.5 this was fixed in kernel-2.6.32-431.5.1.el6, so any kernel before that behaves differently. Now netfilter works as it does for IPv4: only the final, reassembled packet is pushed through the netfilter chains, not the fragments, and the only way to see such fragments again in the netfilter chains is to unload the nf_defrag_ipv6 module.

There is also a positive effect, though. Due to this change, atomic fragments are no longer forwarded as such: they get reassembled and forwarded as a “clean” packet (without the IPv6 Fragment header), which is obviously good, at least from a security perspective.

However, this is not true for genuine fragments, since they are re-fragmented on output if you are forwarding them.

If you unload that module, you can still receive such packets locally, but you lose the ability to use conntrack on that system.

After confirming that this was the case, it was obvious that a workaround had to be found. But first, a few comments.

First, the fact that "...genuine fragments ... get re-fragmented on output, in case you're forwarding" seems to somewhat break one of the basic IPv6 principles (RFC 2460, section 4.5), which states that "unlike IPv4, fragmentation in IPv6 is performed only by source nodes, not by routers along a packet's delivery path". This is not the most important consequence, though, although it does introduce extra delay (due to the reassembly/re-fragmentation process).

As a security engineer I would like to have the ability to block fragments if I wanted to. Fragmentation in IPv6 when combined with Extension headers can have some really bad security consequences, as discussed briefly at the beginning of this article; so, I would like to be able to block them, if, for instance, this is the policy of my organisation, company, etc.

Now, if we use the aforementioned workaround (unloading the suggested kernel module), we will lose the capability of connection tracking, which is not good either (if not worse).

After a (short) discussion and a kind of brainstorming, Florian Westphal from Red Hat suggested that IPv6 fragmented traffic could be filtered using nftables, because nftables lets you define arbitrary tables and priorities, so it is easy to hook packets before they hit defrag. However, at the time of this writing (March 2015), nftables were not supported even on RHEL 7.1. Anyway, Marcelo Ricardo Leitner suggested a rule to block IPv6 fragments using nftables, which would look as follows:

table ip6 filter {

   chain preroute500 {

      type filter hook prerouting priority -500; policy accept;

      ip6 nexthdr ipv6-frag counter

   }

}



As Marcelo explained, the line "ip6 nexthdr ipv6-frag counter" will match whenever an IPv6 packet is seen with its Next Header field set to ipv6-frag, which is only used by fragments (even atomic ones); other traffic will pass this check. Of course, for dropping, you have to add a 'drop' verdict to this rule.
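For completeness, the full table with the 'drop' verdict added would presumably look like this (untested, just like the rule above):

```
table ip6 filter {
   chain preroute500 {
      type filter hook prerouting priority -500; policy accept;
      ip6 nexthdr ipv6-frag counter drop
   }
}
```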

Is the aforementioned approach effective at blocking IPv6 fragments? Frankly, I have not tested it yet, but taking into account the latest kernel changes it certainly seems to be the only way to go. Hopefully, I will have the chance to test it soon, so stay tuned :)