Open DNS on a customer’s modem wreaks havoc

Title

Providing public IPs to customers carries a serious risk. If a DNS port is left open to the internet, a reflection-type DDoS will, eventually, start flowing through the ISP.

Situation

It’s a rainy Sunday evening, and most households are watching TV or Netflix. That’s when the internet connection starts to drop from time to time, and eventually, all households lose it completely.

The support desk at the regional ISP was busy that evening; hundreds of customers were angry enough to call the operator and demand that the outage be resolved immediately.

Challenge

The ISP’s network administrator left the dinner table at home and went to work to resolve the situation.

However, after careful analysis of SNMP telemetry in Zabbix, he wasn’t able to find the root cause of the issue.

The operator thought: if this happens again next weekend, we are going to lose customers for sure.

What to do now?

    Solution

The network administrator looked into netflow data – traffic telemetry (link).

• ISP had netflow export in place on all CORE routers
• Netflow data streams were continuously sent to the central collector running FLOWCUTTER software.

    Among other anomalies, FLOWCUTTER focuses on clear analysis of outgoing attacks. One of those is a DNS reflection attack passing through the network.

The administrator was able to find the root cause of the issue – a customer who had been provided a public IP address. At the household, the customer had accidentally factory-reset a RouterOS-based router/modem, ending up with no password set and a service port open to the internet.

Later analysis revealed that an attacker had taken control of the router and included it in a botnet, using it for a DNS reflection DDoS.

The operator blocked the attacking IP, stopping the attack, and later called the customer to arrange a cleanup.

Moreover, the network administrator set up a daily open-ports scan in order to be alerted early when something like this happens again. Next time he will be ready.
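As a rough illustration of what such a daily scan can look like, here is a minimal Python sketch (this is a simplified TCP check, not FLOWCUTTER’s built-in scanner; a production DNS-exposure scan would also send UDP queries, since DNS mostly runs over UDP):

```python
import socket

def port_open(ip, port, timeout=1.0):
    """Simplified reachability check: a plain TCP connect attempt."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((ip, port)) == 0

def scan_customers(ips, ports=(23, 53, 8291)):
    """Report ports that should not face the internet on customer IPs:
    Telnet (23), DNS (53), and Winbox (8291)."""
    report = {}
    for ip in ips:
        exposed = [p for p in ports if port_open(ip, p)]
        if exposed:
            report[ip] = exposed
    return report
```

Run daily (e.g., from cron) against the customer public-IP range and alert on any non-empty report.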

    Results

Let’s walk through the netflow analysis of the issue at the time of the incident.

The outgoing attack was clearly visible when filtering outgoing connections from source port 53. This port is reserved for DNS, and only resolvers should be sending DNS responses.
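The filter described above can be sketched in a few lines of Python (the flow records here are hypothetical plain dicts shaped like common NetFlow fields; FLOWCUTTER’s actual query interface is not shown):

```python
def outgoing_dns_responses(flows, local_prefix="203.0.113."):
    """Keep flows leaving the network with UDP source port 53.

    Only resolvers should emit DNS responses, so a customer IP that
    shows up here with heavy volume is a reflection-attack suspect.
    """
    return [
        f for f in flows
        if f["proto"] == "UDP"
        and f["src_port"] == 53
        and f["src_ip"].startswith(local_prefix)
    ]

sample_flows = [
    # suspicious: a customer IP answering DNS toward the internet
    {"src_ip": "203.0.113.7", "src_port": 53,
     "dst_ip": "198.51.100.9", "dst_port": 40112,
     "proto": "UDP", "bytes": 3200},
    # normal: an outgoing DNS query from a customer
    {"src_ip": "203.0.113.8", "src_port": 51000,
     "dst_ip": "198.51.100.53", "dst_port": 53,
     "proto": "UDP", "bytes": 80},
]
suspects = outgoing_dns_responses(sample_flows)
```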

It’s also important to note that the volume of this attack was just 25 Mb/s, which is completely invisible in the aggregate traffic of an ISP. Hence other tools weren’t able to detect it, let alone find the attack vector.

On the attacking IP, spoofed DNS requests arriving from foreign countries could be observed.

    Drill-down analysis revealed that the anomaly is DNS related.

In addition to netflow data, a periodic open-ports scan was set up in FLOWCUTTER. That helps the operator be proactive and expose such anomalies next time.

An upcoming feature will, among other things, allow detection of outgoing DNS reflection attacks without any involvement of the ISP’s technical staff.



    Resources

    • Netflow analysis in FLOWCUTTER
    • Open ports scan
    • SNMP vs Flow telemetry
    • Outgoing DNS reflection attack detection

    Takeaway

Open DNS ports, even on a customer’s IP, usually lead to DNS reflection attacks pouring out of the operator’s network. This can wreak havoc on the ISP’s network.

      1. This is one of several root causes that cannot be revealed by analyzing SNMP-like telemetry. 
      2. That’s where netflow data comes in handy. It helps by identifying suspicious traffic patterns in DNS responses.
3. It’s useful to set up a daily open-ports scan to catch the problem before an attack happens.

    ISP resolved the issue with ease. 

    Testimonials

    “It is not a pleasant experience to see our network crumbling but at the same time being blind to why it’s happening.

FLOWCUTTER helped us confirm the root cause – open DNS ports – and identify the customer device where it’s happening. Next time we will be ‘proactive’ and find the issues faster, before they influence other customers.”

    Frantisek Cihak

    Fiber Network Services

“One weekend, we experienced degradation of service at periodic intervals. We were not able to find the root cause from SNMP telemetry. FLOWCUTTER helped us identify a reflection amplification DDoS attack. On Monday, my networks worked perfectly again.”

    Lukas Vacek

    Viridium

When a SYN flood attack hit, the ISP decided to be proactive

    Title

    Carpet bombing of many ISPs revealed differences between reactive and proactive approaches.

    Situation

In May 2025, there was a series of DoS (Denial of Service) attacks targeting hundreds of European ISPs. The attacks took the form of a TCP SYN flood and targeted an operator’s whole IP range, not just one IP or one service. Each incident lasted between 2 and 40 minutes; however, it happened repeatedly, several times a day, for a few days in a row.

During the attack, CORE routers were overwhelmed by the number of incoming connection attempts, resulting in internet outages for the operator’s whole customer base.

Challenge

Most of the ISPs decided to wait it out until the attack was over, hoping their customers wouldn’t complain much.

    Some decided to understand the attack and be prepared for the next time.

The issue is that common NOC monitoring tools, such as Zabbix, are based on SNMP. Unfortunately, SNMP telemetry doesn’t contain the signals necessary to uncover:

• Who the attacker is
• What method was used to attack

    And consequently the ammunition that is needed for attack mitigation is missing.

    What to do?

      Solution

      This is exactly where netflow data comes in handy. FLOWCUTTER netflow analysis and anomaly detection is a perfect fit for such a problem.

      Customers that exported netflow from their CORE routers to FLOWCUTTER were able to analyse the attack and mitigate it easily.

      Results

       

Let’s look at the findings for this particular instance of the DoS attack.

The operator was able to observe an anomaly in connection attempts (flows per second), which rose to 10 times the normal level seen before the attack.


When the operator zoomed in on the time interval of the anomaly, he was able to clearly see the attack vector.

       

It came from AS 202425, headquartered in the Netherlands, with 4 IP addresses responsible for the flood. The attack method was also revealed: a SYN flood from a few source ports.

The ISP mitigated the next attack by either refusing traffic from this AS or simply denying connections from the particular IP addresses.

Note that even though in this case the source port information wasn’t necessary for mitigation, if this had been a distributed DDoS attack, the source port would have been the key to mitigating it using BGP Flowspec.
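The kind of analysis described above can be approximated with a small script. A minimal Python sketch, assuming hypothetical dict-shaped flow records carrying a cumulative `tcp_flags` bitmask (as exported in NetFlow v5/IPFIX):

```python
from collections import Counter

SYN, ACK = 0x02, 0x10  # TCP flag bits

def syn_flood_sources(flows, top_n=5):
    """Count SYN-only flows (handshake never completed) per source IP.

    A handful of sources dominating this list matches the attack
    pattern described in this case study.
    """
    return Counter(
        f["src_ip"]
        for f in flows
        if f["proto"] == "TCP"
        and f["tcp_flags"] & SYN
        and not f["tcp_flags"] & ACK
    ).most_common(top_n)

sample = [
    {"src_ip": "192.0.2.10", "proto": "TCP", "tcp_flags": SYN},
    {"src_ip": "192.0.2.10", "proto": "TCP", "tcp_flags": SYN},
    {"src_ip": "192.0.2.11", "proto": "TCP", "tcp_flags": SYN},
    # a completed handshake (ACK seen in the flow) is not counted
    {"src_ip": "198.51.100.4", "proto": "TCP", "tcp_flags": SYN | ACK},
]
top = syn_flood_sources(sample)
```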


      Resources

      • Netflow analysis in Grafana
• Country and ASN enrichment
      • Flow-based (D)DoS detection

      Takeaway

ISPs that chose the proactive approach were able to quickly and easily determine who was attacking them and how. Consequently, the next wave of attacks didn’t influence them at all.

       

      ISPs with a reactive approach had to wait it out.

      Testimonials

“The first time this flood attack hit us, it caused a short outage. With FLOWCUTTER we were able to identify the attacker fast and mitigate it.”

      Michael Hendrych

      LMnet

Detecting an OT network attack through infected ISP routers

      Title

How compromised ISP routers can reveal attacks on OT networks

      Situation

      This case study highlights a real-world cybersecurity incident that could occur in contemporary enterprise networks.

      A company experienced network outages and performance degradation, which significantly impacted its Operational Technology (OT) manufacturing network. The attack vector originated from an ISP’s infrastructure, allowing the adversary to progressively move closer to industrial control systems and the organization’s central database.

      Attack Progression

      • Reconnaissance Phase: The attacker employed a “spray and pray” tactic, scanning multiple targets indiscriminately in search of vulnerabilities. Using automated scanning tools, the attacker probed exposed services on public IP ranges. High-interest targets included remote management interfaces (SSH, Winbox, and Telnet), outdated web applications, and network infrastructure devices.

      Picture: MITRE ATT&CK® Matrix: visualization of attack phases



• Initial Access: The ISP operated RouterOS-based devices, one of which was compromised due to an outdated firmware vulnerability. The attacker exploited CVE-2022-45315, a vulnerability allowing unauthorized code execution via specially crafted SNMP packets. The exploit provided a foothold into the ISP’s core network, allowing the attacker to execute commands remotely and establish persistence.

      • Privilege Escalation & Establishing Persistence: Once inside, the attacker elevated privileges by exploiting weak credentials and misconfigured access control rules. They also deployed custom scripts to maintain access even after system reboots.

      • Lateral Movement: The compromised ISP router began brute-force attacks on Telnet, SSH, and Winbox services on other network devices. The attacker attempted to map the ISP’s internal network structure, identifying routers, firewalls, and enterprise edge devices that could be leveraged for further exploitation.

      Picture: MITRE ATT&CK® Matrix: visualization of attack phases



      • Enterprise Network Compromise: The attacker successfully took control of the enterprise’s perimeter router, establishing communication with Command & Control (C2) servers. They used DNS tunneling and encrypted HTTP requests to mask malicious activity and avoid detection by standard firewall monitoring.

• OT Network Intrusion: The attacker attempted to compromise the perimeter of the OT network, initiating brute-force attacks on the enterprise’s database servers and performing slow, targeted scans against OT network segments, probing for vulnerable devices. However, the attack was detected at this stage thanks to volumetric NetFlow analysis, and mitigation efforts were implemented before a full breach occurred. The intrusion detection systems (IDS) generated an abnormal volume of logs, which triggered an alert, allowing network administrators to block the attack and temporarily isolate the targeted network segment until the incident was investigated and the attack mitigated.

Challenge

The challenge lies in detecting such incidents:

      • Firewall monitoring failed to detect the attack since the adversary leveraged a trusted ISP infrastructure to move laterally.
      • Absence of Endpoint Detection and Response (EDR) across all systems limited forensic reconstruction of the attack sequence.
• The attack on the OT network remained nearly undetectable, as it was performed gradually and from trusted devices.

      Solution

• Deployment of NetFlow analysis at the ISP level helped reveal abnormal traffic patterns.
• Correlation of multiple data sources is the key to early detection: NetFlow data, vulnerability scans, Intrusion Detection System (IDS) logs, IP reputation and threat feeds, and DNS telemetry.
• NetFlow data helped reconstruct the incident and informed decisions on how to improve the security posture.
• Cleanup mainly consisted of a) better isolation (segmentation) of network devices, and b) updating all infected devices.
• Monitoring of both ISP and enterprise infrastructure was put in place, namely regular vulnerability scans and update-status checks of all key devices.
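The correlation idea above can be sketched very simply (the record shapes here are invented for illustration; real IDS and flow schemas differ):

```python
from datetime import datetime, timedelta

def correlate(flow_anomalies, ids_alerts, window=timedelta(minutes=10)):
    """Pair each flow anomaly with IDS alerts that mention the same IP
    within a time window; corroborated anomalies are the ones worth
    escalating first."""
    return [
        (a, alert)
        for a in flow_anomalies
        for alert in ids_alerts
        if alert["ip"] == a["ip"] and abs(alert["ts"] - a["ts"]) <= window
    ]

t0 = datetime(2025, 3, 1, 12, 0)
anomalies = [{"ip": "10.0.0.5", "ts": t0, "kind": "brute-force fan-out"}]
alerts = [
    {"ip": "10.0.0.5", "ts": t0 + timedelta(minutes=3), "sig": "SSH brute force"},
    {"ip": "10.0.0.9", "ts": t0, "sig": "port scan"},  # different host, ignored
]
matches = correlate(anomalies, alerts)
```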

            Results

            By leveraging advanced network analysis, data correlation, and proactive monitoring, security teams were able to detect the attack before critical OT assets were compromised. This case study underscores the importance of collaboration between ISPs and enterprises in fortifying cybersecurity defenses.

              Resources

              • Netflow analysis in Grafana
              • Anomaly detection
              • Flow enrichment with IP reputation
              • Vulnerability scan
              • Integration of DNS security solution data feed
              • Integration of IDS logs in Grafana

              Takeaway

              Firewall monitoring alone is insufficient—NetFlow data provides deeper insights into traffic flows.

              Proactive security measures at the ISP level can prevent attack propagation.

              Network segmentation and multi-source log correlation enhance detection capabilities.

              Attacks often exploit the weakest link—in this case, vulnerabilities within ISP infrastructure.



With SNMP only, the ISP would have lost a key customer

              Title

An anomaly that SNMP monitoring couldn’t spot: flow-based analysis revealed the root cause and helped the ISP retain a key enterprise customer.

              Situation

A key enterprise customer called the ISP’s technical support complaining about latency issues when using Teams. The network administrator checked the router to which the customer is connected, together with hundreds of other customers, and analyzed latency data stored in Prometheus.

              Screenshot: latency on 30s intervals on router

The latency graph revealed a periodic anomaly that lasted 10 minutes and repeated every hour.

Packet drops and CPU usage revealed a similar trend.
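A periodicity like this (a 10-minute spike repeating every hour) can be exposed even by a tiny script. A sketch over synthetic data, assuming (epoch-second, latency-ms) sample pairs:

```python
from collections import defaultdict

def elevated_minutes(samples, factor=2.0):
    """Bucket latency samples by minute-of-hour; an hourly spike shows
    up as a run of adjacent buckets well above the overall mean."""
    buckets = defaultdict(list)
    for ts, latency in samples:
        buckets[(ts // 60) % 60].append(latency)
    every = [v for vals in buckets.values() for v in vals]
    overall = sum(every) / len(every)
    return sorted(m for m, vals in buckets.items()
                  if sum(vals) / len(vals) > factor * overall)

# Two hours of synthetic data: 100 ms during the first 10 minutes of
# each hour, 10 ms otherwise.
samples = [(m * 60, 100 if m % 60 < 10 else 10) for m in range(120)]
spike = elevated_minutes(samples)
```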

Challenge

However, based on SNMP telemetry, the administrator wasn’t able to find the root cause of the issue.

              What to do now?

              Solution

The network administrator looked into netflow data – traffic telemetry (link).

• ISP had netflow export in place on all CORE routers
• Netflow data streams were continuously sent to the central collector running FLOWCUTTER software.

With the help of FLOWCUTTER’s ability to perform fast drill-down analysis of the flow dataset, the administrator was able to find the root cause of the issue.

In addition to netflow data, a periodic open-ports scan was set up in FLOWCUTTER. That helped expose the root cause of the anomaly.

                Results

On the target router, there was an anomaly – traffic went down while the number of talkers went up.

                  Drill-down analysis revealed that the anomaly is DNS related.

After that, the administrator checked the dashboard with results from the open-ports scan run the previous night. It showed that another customer with a public IP had opened the DNS port to the public. That put additional stress on the router, influencing other customers in the same region.

Here are more examples of what can be revealed about a customer within seconds:

• Upload/download volumes
• Ports and protocols related to specific services: FTP, Telnet, SSH
• Whether the IP is blacklisted
• Communication with a botnet
• Open ports and vulnerabilities visible from outside

                  Resources

                  • Netflow analysis in Grafana
                  • Open ports scan
                  • SNMP vs Flow telemetry

                  Takeaway

                  1. There are many root causes that cannot be revealed by analyzing SNMP-like telemetry. That’s where netflow data comes in handy. It helps by providing deeper insight into the source and destination of each traffic flow.
2. In addition to SNMP and netflow, it’s useful to correlate with other data sources – in this case, an open-ports scan.

                  ISP resolved the issue with ease. 

The second customer, where the root cause lay, was called and pointed to the misconfiguration. The port was closed and the anomalies stopped.

For the key enterprise customer, the latency issue was resolved, helping ensure a good relationship.

                  Testimonials

“One weekend, we experienced degradation of service at periodic intervals. We were not able to find the root cause from SNMP telemetry. FLOWCUTTER helped us identify a reflection amplification DDoS attack. On Monday, my networks worked perfectly again.”

                  Lukáš Vacek

                  Viridium

How one infected modem could quietly get a /22 prefix blacklisted

                  Title

Malware in just one customer’s device almost ruined the reputation of the whole prefix, potentially causing problems for all of the ISP’s customers.

                  Situation

The operator provides internet to both enterprise and home customers. Home customers can pay extra for a public IP, for example when they have a security camera system at home and want to check on it from work. One of those home modems/routers got infected by malware. Consequently, the device was included in a botnet.

In the case of this botnet, the goal of the week was to scan devices around the internet for open Telnet ports and then try to infect them with the latest batch of attacks exploiting known vulnerabilities.

Challenge

Such an attacking device quickly ends up on public blacklists. At first, this affects just the one device with its single IP address. So far so good.

What can easily happen later is that the whole prefix (in this case a /22) gets blacklisted on IP reputation lists. Peering partners may then start to challenge the operator of the AS (Autonomous System) and demand that the issue be corrected.

At this point, a small anomaly on one modem causes a lot of damage. The amount of work to be done a week later is enormous in comparison to correcting the issue right at the beginning.

So it’s a “no-brainer” – we have to spot such anomalies, right?

Not so fast. Normally such an anomaly flies under the radar, undetected, if an ISP relies just on SNMP (e.g. Zabbix, Nagios). Administrators usually can’t detect it, and routers aren’t aware of it, as it doesn’t tax the hardware or result in many bytes and packets travelling around the network.

                  What to do?

                  Solution

                  First of all, an operator should use flow-based traffic analysis, so that he/she can catch this anomaly. 

Fortunately, in this case the ISP had some measures in place:

• ISP had netflow export from perimeter routers
• Netflow was stored in the central collector running FLOWCUTTER software.
• With FLOWCUTTER, any admin can easily do a fast drill-down analysis of netflow and other data sources.

                  A quick morning look at the overview (Home dashboard) in FLOWCUTTER with just a few metrics revealed a trend shift in the number of talkers (distinct communication source-destination IP pairs).

Fast drill-down analysis revealed that the anomaly was situated on one particular IP (a home customer with a public IP).

This end device was sending millions of Telnet packets (port 23) every few minutes across the whole internet.
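In flow data, this behavior shows up as an extreme destination fan-out on TCP/23. A minimal sketch (hypothetical dict-shaped flow records again, not FLOWCUTTER’s detection engine):

```python
from collections import defaultdict

def telnet_scanners(flows, fanout_threshold=100):
    """Flag source IPs contacting an unusually large number of distinct
    destinations on TCP port 23: the jump in 'talkers' described above."""
    fanout = defaultdict(set)
    for f in flows:
        if f["proto"] == "TCP" and f["dst_port"] == 23:
            fanout[f["src_ip"]].add(f["dst_ip"])
    return {ip: len(dsts) for ip, dsts in fanout.items()
            if len(dsts) >= fanout_threshold}

# One infected modem probing 500 hosts, plus a normal HTTPS client.
flows = [{"src_ip": "203.0.113.50",
          "dst_ip": f"198.51.{i // 250}.{i % 250}",
          "proto": "TCP", "dst_port": 23} for i in range(500)]
flows.append({"src_ip": "203.0.113.51", "dst_ip": "198.51.100.1",
              "proto": "TCP", "dst_port": 443})
scanners = telnet_scanners(flows)
```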

It took just a few hours for this IP to be blacklisted on IP reputation lists.

                    Results

What if the admin doesn’t want to look at FLOWCUTTER every single day?

For that purpose, FLOWCUTTER helps in two ways:

1. Set up “out of the box” detection of various network anomalies – including Telnet,
2. Enrich netflow data with IP reputation, checking and alerting when any of your IPs is blacklisted.
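Blacklist checks like the one in point 2 commonly work over DNSBLs. A sketch of the standard query form (reversed octets prepended to the list’s zone; zen.spamhaus.org is shown only as a well-known example zone, and a production check should respect each list’s usage policy):

```python
import socket

def dnsbl_query_name(ip, zone="zen.spamhaus.org"):
    """Build the standard DNSBL lookup name for an IPv4 address:
    e.g. 203.0.113.7 -> 7.113.0.203.zen.spamhaus.org"""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_blacklisted(ip, zone="zen.spamhaus.org"):
    """An A-record answer means 'listed'; NXDOMAIN means 'not listed'."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        return False
```

Loop this over your own prefix daily and alert on any hit.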

                      Resources

                      • Netflow analysis in Grafana
                      • SNMP vs Flow telemetry
                      • IP reputation
                      • Flow-based Anomaly detection

                      Takeaway

The ISP detected the Telnet anomaly early and so was able to prevent a cascade of bad outcomes.

1. Some misconfigurations and infected endpoints can damage the operator’s IP prefix or AS reputation.
2. With flow-based troubleshooting, these anomalies can be spotted and corrected early, before they wreak havoc within the network.

Going forward, the ISP uses FLOWCUTTER’s ability to monitor and alert on network anomalies, as well as to regularly check the reputation of its IP range – so that next time it will be alerted even faster.