
PFRING ZC Not Working with ZMAP #694

Closed
stanley111111 opened this issue Jun 27, 2022 · 3 comments · Fixed by #875

@stanley111111

ntop/PF_RING#818

Any ideas why PFRING ZC doesn't seem to work with ZMAP? Anyone had any luck with the 10gbe driver? :)

@dadrian dadrian added this to the ZMap 4.0 milestone Jul 1, 2022
@dadrian
Member

dadrian commented Jul 1, 2022

The PF_RING code is almost a decade old at this point, and I no longer have access to a test setup for it. I assume it needs code updates. Note that ZMap's PF_RING support requires recompiling from source and setting a custom build flag, so make sure you're testing with that.

@dadrian dadrian mentioned this issue Aug 2, 2022
@zakird zakird modified the milestones: ZMap 4.0, ZMap 4.1 Sep 11, 2023
@davideandres95

Hi, I am investigating why zmap reports packets as sent when using PF_RING even though none actually leave the interface. Since sending in zc: mode would prevent me from capturing the outgoing packets with tcpdump, I am sending without the zc: prefix, but still compiling with the corresponding flags.

To verify that the packets should show up in tcpdump, I send with the zsend example application from PF_RING, without the zc: prefix:
/tmp/PF_RING/userland/examples_zc$ sudo ./zsend -f ping_google_dns.pcap -i enp2s0f0

$ sudo tcpdump -i enp2s0f0 'icmp and host 8.8.8.8' -c 10 -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp2s0f0, link-type EN10MB (Ethernet), capture size 262144 bytes
14:48:44.081065 IP 80.81.195.138 > 8.8.8.8: ICMP echo request, id 4715, seq 2, length 64
14:48:44.081080 IP 80.81.195.138 > 8.8.8.8: ICMP echo request, id 4715, seq 3, length 64
14:48:44.081086 IP 80.81.195.138 > 8.8.8.8: ICMP echo request, id 4715, seq 4, length 64
14:48:44.081090 IP 80.81.195.138 > 8.8.8.8: ICMP echo request, id 4715, seq 5, length 64
14:48:44.081095 IP 80.81.195.138 > 8.8.8.8: ICMP echo request, id 4715, seq 6, length 64
14:48:44.081099 IP 80.81.195.138 > 8.8.8.8: ICMP echo request, id 4715, seq 7, length 64
14:48:44.081104 IP 80.81.195.138 > 8.8.8.8: ICMP echo request, id 4715, seq 8, length 64
14:48:44.081108 IP 80.81.195.138 > 8.8.8.8: ICMP echo request, id 4715, seq 9, length 64
14:48:44.081113 IP 80.81.195.138 > 8.8.8.8: ICMP echo request, id 4715, seq 10, length 64
14:48:44.081117 IP 80.81.195.138 > 8.8.8.8: ICMP echo request, id 4715, seq 11, length 64
10 packets captured
12 packets received by filter
0 packets dropped by kernel

However, when running zmap:

zmap -M icmp_echo_time -O json --output-file=./res.json 8.8.8.8 --source-ip=91.214.252.99 --interface=enp2s0f0 --gateway-mac=88:a2:5e:10:17:c5 -f sent_timestamp_ts,sent_timestamp_us,timestamp_ts,timestamp_us,recv_timestamp_ts,recv_timestamp_us --verbosity 4
Oct 11 14:50:42.607 [DEBUG] zmap: zmap main thread started
Oct 11 14:50:42.607 [DEBUG] zmap: syslog support enabled
Oct 11 14:50:42.607 [DEBUG] zmap: requested ouput-module: json
Oct 11 14:50:42.607 [DEBUG] fieldset: probe module does not supply application success field.
Oct 11 14:50:42.607 [INFO] dedup: Response deduplication method is full
Oct 11 14:50:42.607 [DEBUG] zmap: requested output field (0): sent_timestamp_ts
Oct 11 14:50:42.607 [DEBUG] zmap: requested output field (1): sent_timestamp_us
Oct 11 14:50:42.607 [DEBUG] zmap: requested output field (2): timestamp_ts
Oct 11 14:50:42.607 [DEBUG] zmap: requested output field (3): timestamp_us
Oct 11 14:50:42.607 [DEBUG] zmap: requested output field (4): recv_timestamp_ts
Oct 11 14:50:42.607 [DEBUG] zmap: requested output field (5): recv_timestamp_us
Oct 11 14:50:42.607 [INFO] filter: No output filter provided. ZMap will output all results, including duplicate and non-successful responses (e.g., RST and ICMP packets). If you want a filter similar to ZMap's default behavior, you can set an output filter similar to the following: --output-filter="success=1 && repeat=0".
Oct 11 14:50:42.607 [DEBUG] SEND: ipaddress: 91.214.252.99
Oct 11 14:50:42.607 [DEBUG] constraint: blocklisting 0.0.0.0/0
Oct 11 14:50:42.607 [DEBUG] constraint: Painting value 1
Oct 11 14:50:42.610 [DEBUG] constraint: 0 IPs in radix array, 1 IPs in tree
Oct 11 14:50:42.610 [DEBUG] constraint: 1 addresses (0% of address space) can be scanned
Oct 11 14:50:42.772 [DEBUG] send: gateway MAC address 88:a2:5e:10:17:c5
Oct 11 14:50:42.772 [DEBUG] zmap: output module: json
Oct 11 14:50:42.772 [DEBUG] iterator: bits needed for 1 addresses: 0
Oct 11 14:50:42.772 [DEBUG] iterator: bits needed for 1 ports: 0
Oct 11 14:50:42.772 [DEBUG] iterator: minimum elements to iterate over: 1
Oct 11 14:50:42.772 [DEBUG] iterator: max index 1
Oct 11 14:50:42.772 [DEBUG] zmap: Isomorphism: 3
Oct 11 14:50:42.772 [DEBUG] iterator: max targets is 4294967295
Oct 11 14:50:42.772 [DEBUG] send: srcip_first: 91.214.252.99
Oct 11 14:50:42.772 [DEBUG] send: srcip_last: 91.214.252.99
Oct 11 14:50:42.772 [DEBUG] send: will send from 1 address on 28233 source ports
Oct 11 14:50:42.772 [DEBUG] send: rate set to 10000 pkt/s
Oct 11 14:50:42.772 [DEBUG] send: no source MAC provided. automatically detected 3c:fd:fe:a8:f1:f4 as hw interface for enp2s0f0
Oct 11 14:50:42.772 [DEBUG] send: source MAC address 3c:fd:fe:a8:f1:f4
Oct 11 14:50:42.773 [DEBUG] zmap: Pinning receive thread to core 0
Oct 11 14:50:42.773 [DEBUG] recv: capturing responses on enp2s0f0
Oct 11 14:50:42.773 [INFO] recv: duplicate responses will be passed to the output module
Oct 11 14:50:42.773 [INFO] recv: unsuccessful responses will be passed to the output module
Oct 11 14:50:42.773 [DEBUG] zmap: 1 sender threads spawned
Oct 11 14:50:42.773 [DEBUG] zmap: Pinning a send thread to core 2
Oct 11 14:50:42.774 [DEBUG] zmap: Pinning monitor thread to core 3
Oct 11 14:50:42.774 [DEBUG] send: send thread started
Oct 11 14:50:42.774 [DEBUG] send: source MAC address 3c:fd:fe:a8:f1:f4
Oct 11 14:50:42.774 [DEBUG] send: send thread 0 finished, shard depleted
Oct 11 14:50:42.774 [DEBUG] send: thread 0 cleanly finished
Oct 11 14:50:42.774 [DEBUG] zmap: senders finished
 0:00 0%; send: 1 done (992 p/s avg); recv: 0 0 p/s (0 p/s avg); drops: 0 p/s (0 p/s avg); hitrate: 0.00%
Oct 11 14:50:42.774 [DEBUG] zmap: send queue flushed
 0:01 13%; send: 1 done (992 p/s avg); recv: 0 0 p/s (0 p/s avg); drops: 0 p/s (0 p/s avg); hitrate: 0.00%
 0:02 25%; send: 1 done (992 p/s avg); recv: 0 0 p/s (0 p/s avg); drops: 0 p/s (0 p/s avg); hitrate: 0.00%
 0:03 38%; send: 1 done (992 p/s avg); recv: 0 0 p/s (0 p/s avg); drops: 0 p/s (0 p/s avg); hitrate: 0.00%
 0:04 50%; send: 1 done (992 p/s avg); recv: 0 0 p/s (0 p/s avg); drops: 0 p/s (0 p/s avg); hitrate: 0.00%
 0:05 63% (3s left); send: 1 done (992 p/s avg); recv: 0 0 p/s (0 p/s avg); drops: 0 p/s (0 p/s avg); hitrate: 0.00%
 0:06 75% (2s left); send: 1 done (992 p/s avg); recv: 0 0 p/s (0 p/s avg); drops: 0 p/s (0 p/s avg); hitrate: 0.00%
 0:07 88% (1s left); send: 1 done (992 p/s avg); recv: 0 0 p/s (0 p/s avg); drops: 0 p/s (0 p/s avg); hitrate: 0.00%
Oct 11 14:50:50.775 [DEBUG] recv: thread finished
Oct 11 14:50:50.878 [INFO] zmap: completed

It claims that the packet has been sent, but I can't see anything in my tcpdump.

Do you have any idea why this could be?

@davideandres95

I have found several things related to this issue, one of them being a bug.

First, when support for cooked mode was introduced (#504), PF_RING was left behind. The culprit is

struct ip *ip_hdr = (struct ip *)&bytes[zconf.data_link_size];

which expects zconf.data_link_size to account for the Ethernet header, but the PF_RING receive path never sets it.

A possible solution I have tested would be to set zconf.data_link_size inside recv-pfring.c:recv_init:

void recv_init()
{
	// Get the socket and packet handle
	pf_recv = zconf.pf.recv;
	pf_buffer = pfring_zc_get_packet_handle(zconf.pf.cluster);
	if (pf_buffer == NULL) {
		log_fatal("recv", "Could not get packet handle: %s",
			  strerror(errno));
	}
	// Added: PF_RING delivers raw Ethernet frames, so record the
	// data-link header size that the generic receive path skips over.
	zconf.data_link_size = sizeof(struct ether_header);
}

Without it, headers are not parsed correctly and no packet passes validation.

Second, packets are never actively flushed; the queues are only synced upon termination. That sync should flush the small number of packets still unsent when the sender finishes, but I have verified that if a single probe to a single target is scheduled, no packet goes out at all. Scheduling more than 512 probes flushes them automatically, so for large scans I believe most packets will be sent, but any remainder (<512) at the end may never leave the queue.

After looking at the PF_RING ZC examples, I believe the queues should be synced within the timing-delay loop so that no packets stay buffered while waiting, but I am a beginner in this area and leave it up to you how this should be handled. As a workaround, I am flushing every single packet individually, which degrades throughput but improves the latency observed by my probes, which is relevant for my use case (latency measurements).
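For reference, the ZC API exposes both approaches (a non-compilable sketch modeled on PF_RING's zsend example; the queue and buffer variables `zq` and `buf` are illustrative, not ZMap's actual names):

```c
// Per-packet flush (the workaround above): the third argument of
// pfring_zc_send_pkt() asks ZC to push this frame out immediately.
pfring_zc_send_pkt(zq, &buf, 1 /* flush_packet */);

// Alternatively, send with flush_packet = 0 for throughput and
// periodically drain the TX queue, e.g. inside the rate-limit wait:
pfring_zc_sync_queue(zq, tx_only);
```

The per-packet flush trades batching efficiency for immediacy, which is why it hurts throughput on large scans but helps latency-sensitive measurements.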

@zakird zakird modified the milestones: ZMap 4.1, ZMap 4.2 Mar 2, 2024
@droe droe mentioned this issue May 20, 2024