Limiting tcp reset response from 422 to 192 packets/sec
It looks like your server’s kernel is working overtime to protect itself. Those log entries indicate that your system is hitting a TCP RST (Reset) rate limit.
In plain English: your server is being flooded with connection attempts that it doesn't want, and it's trying to say "No" (via a Reset packet) so fast that the kernel is stepping in to throttle those responses to save CPU and bandwidth.
What’s Likely Happening?
This behavior is usually triggered by one of three things:
- A Port Scan: Someone is probing your server to see which ports are open. Your server sends a RST for every closed port they hit.
- A Distributed Denial of Service (DDoS): Specifically a TCP SYN Flood. Attackers are sending spoofed connection requests, and your server is exhausted trying to reject them.
- A Misconfigured App: Less likely, but a local or remote service could be stuck in a loop trying to connect to a port that isn't listening.
Should You Be Worried?
The kernel doing this is actually a good thing—it’s a built-in defense mechanism (icmp_ratelimit or similar TCP protections) preventing your server from becoming a "Reset reflector" that consumes all its own outgoing bandwidth.
However, since this is happening every second in your logs, the traffic is persistent enough to warrant a look.
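If you want to inspect the cap itself: on FreeBSD (an assumption here, though the PF usage discussed later suggests a BSD), the limit behind this exact log line is a sysctl; on Linux the nearest equivalent is net.ipv4.icmp_ratelimit. A minimal sketch:

```bash
# Assumes FreeBSD; sysctl names differ on other systems.
sysctl net.inet.icmp.icmplim        # current RST/ICMP error rate cap, packets/sec (default 200)
sysctl net.inet.icmp.icmplim=500    # raise it until reboot (root required)
```

Setting the limit to 0 disables the limiter entirely, which is rarely a good idea on an internet-facing host.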
Immediate Steps to Take
1. Identify the Source
Run this command to see which IP addresses are currently hitting your server the hardest:
```bash
netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n
```
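In case the pipeline is opaque: it extracts the remote-address column, strips the port, and counts occurrences per IP. Here it is run against a few fabricated netstat lines so you can see the shape of the output:

```bash
# Fabricated `netstat -ntu` lines piped through the same filter.
printf '%s\n' \
  'tcp 0 0 10.0.0.5:22 203.0.113.9:50514 ESTABLISHED' \
  'tcp 0 0 10.0.0.5:80 203.0.113.9:50515 ESTABLISHED' \
  'tcp 0 0 10.0.0.5:80 198.51.100.4:40210 ESTABLISHED' \
  | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n
# Because of `sort -n`, the busiest remote IP ends up on the last line.
```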
2. Check for SYN Flooding
If you see a massive number of connections in SYN_RECV state, you are under a SYN flood attack:
```bash
netstat -nap | grep SYN_RECV | wc -l
```
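Beyond the raw count, grouping the half-open connections by source IP tells you whether the flood comes from a handful of hosts or is widely distributed. A sketch against fabricated netstat lines (on a real box, pipe `netstat -nap` in directly):

```bash
# Count SYN_RECV entries per remote source IP (sample data shown).
printf '%s\n' \
  'tcp 0 0 10.0.0.5:80 203.0.113.9:1111 SYN_RECV' \
  'tcp 0 0 10.0.0.5:80 203.0.113.9:1112 SYN_RECV' \
  'tcp 0 0 10.0.0.5:80 198.51.100.4:2222 SYN_RECV' \
  | grep SYN_RECV | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -nr
# One dominant source suggests a single attacker you can block;
# an even spread suggests a spoofed or botnet-driven flood.
```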
3. Check What Ports are Targeted
Use tcpdump for a few seconds to see exactly what the incoming traffic looks like:
```bash
tcpdump -c 100 -i eth0 'tcp[tcpflags] & tcp-rst != 0'
```
(Replace eth0 with your actual network interface name.)
How to Quiet the Logs
If you've confirmed this is just "internet noise" (standard background scanning) and it's not affecting performance, you can tune the kernel limits, though I'd recommend investigating the IPs first.
- To see current limits: `sysctl net.ipv4.icmp_ratelimit`
- To block a specific offending IP: `iptables -A INPUT -s [OFFENDING_IP] -j DROP`
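If you would rather throttle than hard-block on Linux, netfilter's `limit` match can cap how many new SYNs you accept per second. A sketch with illustrative numbers (tune them to your real traffic before deploying, and note rule order matters):

```bash
# Accept at most 25 new SYNs/sec (burst of 50), then drop the excess.
# The 25/50 thresholds are illustrative assumptions, not recommendations.
iptables -A INPUT -p tcp --syn -m limit --limit 25/second --limit-burst 50 -j ACCEPT
iptables -A INPUT -p tcp --syn -j DROP
```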
Since you are using PF (Packet Filter)—common on BSD systems like FreeBSD, OpenBSD, or macOS—we can handle this much more efficiently than the kernel’s default rate-limiting.
PF is excellent because it can "track" connections and automatically ban IPs that behave aggressively, preventing those TCP RST logs from ever appearing.
1. The "Quick Fix" Rule
To stop the flood of logs and save your CPU, you should add a rule to your /etc/pf.conf that limits how many connections a single host can attempt.
Add this to your filter rules (usually near your pass/block section):
```pf
# Define a table for "abusive" hosts
table <bruteforce> persist

# Block anything in the bruteforce table
block in quick from <bruteforce>

# Pass traffic but monitor for high-frequency connection attempts
# This example allows 100 connections per 5 seconds
pass in on egress proto tcp from any to any port { 80, 443, 22 } \
    flags S/SA keep state \
    (max-src-conn-rate 100/5, overload <bruteforce> flush global)
```
What this does:
- `max-src-conn-rate 100/5`: If an IP tries to connect more than 100 times in 5 seconds, it triggers the rule.
- `overload <bruteforce>`: The offending IP is instantly added to the `<bruteforce>` table.
- `flush global`: This kills all existing connections from that IP immediately.
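For SSH specifically you can afford to be much stricter than for web traffic. A sketch layered on the same `<bruteforce>` table, with deliberately tight (and purely illustrative) numbers:

```pf
# Stricter limits for SSH: at most 10 concurrent connections per host,
# and no more than 5 new connections per 30 seconds (illustrative values).
pass in on egress proto tcp to any port 22 \
    flags S/SA keep state \
    (max-src-conn 10, max-src-conn-rate 5/30, \
     overload <bruteforce> flush global)
```

`max-src-conn` caps simultaneous states per source, while `max-src-conn-rate` caps the rate of new ones; legitimate admins rarely trip either.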
2. Check the "Attacker" IPs
Before you apply the rules, you can see which IPs are currently causing the state table to bloat. Run this in your terminal:
```bash
# Show current states and count the busiest remote endpoints.
# On a plain state listing, field 5 is the far end of the connection
# (NAT states print an extra address and may need different handling).
pfctl -ss | awk '$4 == "<-" || $4 == "->" {print $5}' | cut -d: -f1 | sort | uniq -c | sort -nr | head -n 10
```
3. Applying the Changes
Once you have edited your /etc/pf.conf, you need to check the syntax and reload the configuration:
```bash
# Check syntax first (very important so you don't lock yourself out!)
pfctl -nf /etc/pf.conf

# If no errors, reload
pfctl -f /etc/pf.conf
```
4. Managing the "Banned" List
Since the <bruteforce> table persists in memory, you might want to see who got caught or clear them later:
- View banned IPs: `pfctl -t bruteforce -T show`
- Manually remove an IP: `pfctl -t bruteforce -T delete 1.2.3.4`
- Clear the whole table: `pfctl -t bruteforce -T flush`
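Entries in a `persist` table stay until a reboot or a manual flush, so it's common to age them out on a schedule. A sketch using pfctl's `expire` subcommand (the 24-hour window is an assumption; pick what fits your threat model):

```bash
# Remove table entries whose stats were last cleared more than 86400 s (24 h) ago.
pfctl -t bruteforce -T expire 86400

# Example root crontab line to run this hourly:
# 0 * * * * pfctl -t bruteforce -T expire 86400
```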
A quick heads-up: If you are seeing these logs on a specific port (like 22 for SSH or 80 for HTTP), it’s often a sign of a "Low and Slow" attack or a basic botnet scan.