Welcome back, cyberwarriors!
In Part 1, we mapped the terrain and saw how nuclear power plants work and how digital safety systems like Teleperm XS are designed and trusted. In Part 2, we move from architecture to timing. Even the most heavily protected system is vulnerable at specific moments, when people, procedures, and technology briefly align in ways they were never meant to. We will explore those moments and show how a theoretical attack could exploit normal plant behavior rather than work against it.
The Window Of Opportunity
Now that we know how such an attack could work, the remaining question is when. At what moment can an attacker actually infect the system? There are two main scenarios.
Planned Refueling Outage
Every one and a half to two years, the reactor is shut down for several weeks to replace part of the nuclear fuel. It’s a massive event. Up to 2,000 additional workers and contractors from dozens of companies arrive on site. The security perimeter expands and control weakens. In this chaos, it becomes much easier to attack via an insider or a compromised contractor.

Opening an equipment cabinet that is normally under strict control and connecting an infected device to the Service Unit becomes possible amid the surrounding commotion.
Continuous Connection
Regulators and operators still debate whether the Service Unit should be permanently connected to the safety network. On one hand, a permanent connection provides a complete real-time view of system status. On the other, it creates a standing attack vector. At some modern plants (Flamanville 3 in France, Olkiluoto 3 in Finland), the Service Unit is permanently connected, which means the attacker does not even need to wait for a refueling outage. The main challenge is simply finding a way into the local network.
Modeling a Catastrophe: “Cyber Three Mile Island”
The picture comes together. We have a target (the Service Unit), a way to bypass the first line of defense (MAC address spoofing), and an understanding of how to neutralize the main trump card, the physical key, which turned out to be a software imitation.
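How low that first hurdle is becomes clear if we look at what MAC-based filtering actually checks: six bytes in the Ethernet header that the sender fills in itself. A minimal sketch in Python (the MAC addresses are invented for illustration, and the interface name in the commented-out send step is a placeholder):

```python
import struct

def build_frame(src_mac: str, dst_mac: str, ethertype: int, payload: bytes) -> bytes:
    """Assemble a raw Ethernet II frame with an arbitrary (spoofed) source MAC."""
    def mac_bytes(mac: str) -> bytes:
        return bytes(int(octet, 16) for octet in mac.split(":"))
    # Ethernet II header: destination MAC, source MAC, EtherType.
    header = mac_bytes(dst_mac) + mac_bytes(src_mac) + struct.pack("!H", ethertype)
    return header + payload

# A whitelisted engineering-workstation address (hypothetical value) is simply
# copied into the source field; a port filter cannot tell the difference.
frame = build_frame(src_mac="00:0e:8c:11:22:33",   # spoofed "trusted" address
                    dst_mac="00:0e:8c:44:55:66",   # Service Unit (hypothetical)
                    ethertype=0x0800,              # IPv4
                    payload=b"...")

# Actually sending it would require a raw socket and root privileges, e.g.:
# s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
# s.bind(("eth0", 0)); s.send(frame)
```

The point of the sketch is that the "identity" the filter trusts is attacker-controlled data, not a property of the hardware.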
The Ideal Target
The Small Loss‑of‑Coolant Accident (SLOCA) scenario was chosen deliberately; it is close to optimal for an attacker, for several reasons. First, something like it has already happened: Three Mile Island showed that even a small leak can lead to core meltdown if the safety systems do not respond correctly. It remains the most serious accident involving a modern PWR, making it an ideal “template” for a cyberattack.
Second, the pressurizer safety valves are controlled by digital systems, and opening these valves can be done programmatically. Such an attack creates a leak that initially appears to be normal operation of the safety valves.
The third factor is perfect timing. Unlike instantaneous accidents, a SLOCA develops slowly enough for malware to block emergency response systems, yet quickly enough that operators do not have sufficient time to fully assess the situation and take the correct actions. Finally, there is the psychological factor. Operators are accustomed to safety valves actuating. It is a normal protective function. Precious time is lost on diagnosis.

Step 1: Infection
It all begins during a scheduled refueling outage. An attacker gains access to the Service Unit through a compromised contractor or an insider. During maintenance procedures, when an engineer inserts and turns the key to change settings, the malware exploits this “window of trust.” The malware overwrites the firmware on all ALU and APU controllers it can reach. The attack must be comprehensive in order to bypass redundancy. In addition, it may compromise the gateway to manipulate data displayed on the operator consoles. After that, all that remains is to wait.
Step 2: Initiating Event
The malware residing in the ALU controllers issues a command to open two of the three safety valves on the pressurizer. The pressurizer is essentially a large tank located at the top of the reactor that creates a steam “cushion,” maintaining stable pressure in the primary circuit. The valves open on command and become “stuck” in the open position. A coolant leak begins in the form of steam escaping from the primary, most radioactive loop.
Step 3: Blocking the Protection
System pressure starts to drop, and the automation detects it. Under normal conditions, the safety system’s response would be to immediately start emergency water injection to compensate for the leak. But the malware in the ALU blocks this command. It sees the “low pressure” signal, sees the request to start the pumps, and simply ignores it. The pumps remain silent. The operators are deeply alarmed. They see the open valves (this signal is transmitted over a hardware channel and cannot be spoofed) and the pressure dropping catastrophically. They attempt to manually start the high‑pressure pumps. At this point, the PACS priority system comes into play. It is designed so that commands from the protection system (where the malware resides) have higher priority than operator commands. Any attempt to close a valve or start a pump is immediately overridden by the malware.
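The arbitration principle at the heart of this step can be illustrated with a toy model. This is emphatically not the real PACS logic, just a sketch of the idea that protection-system commands outrank operator commands, so malware sitting in the protection layer wins every conflict:

```python
# Toy model of priority-based command arbitration (illustrative only,
# not the actual PACS implementation). Lower number = higher priority.
PRIORITY = {"protection_system": 0, "operator": 1}

def arbitrate(commands):
    """Select the command that actually reaches the actuator."""
    return min(commands, key=lambda c: PRIORITY[c["source"]])

# The operator tries to start high-pressure injection manually, while the
# compromised protection layer simultaneously asserts an inhibit command.
winner = arbitrate([
    {"source": "operator",          "action": "start_hpi_pump"},
    {"source": "protection_system", "action": "inhibit_hpi_pump"},  # malware
])
# The manual command is overridden: winner carries the inhibit action.
```

The design intent is sound, since a healthy protection system should always be able to override a confused human, but the same hierarchy works against the operators once the protection layer itself is compromised.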
PCTran Simulation: 49 Minutes to Disaster
This scenario can be run in PCTran, a professional nuclear accident simulator used by regulators and plant operators worldwide. The results are chilling.

+69 seconds: Pressure in the primary loop drops enough to automatically shut down the reactor. Control rods drop into the core, stopping the chain reaction. But the fuel is still extremely hot and continues to produce decay heat. It must be cooled.
+98 seconds: Steam escaping through the open valves bursts the rupture disk of the relief tank. Radioactive steam and water begin filling the reactor containment.
+520 seconds (about 9 minutes): Falling pressure and high temperature in the primary loop create conditions for boiling in the core. This introduces an additional threat.
Primary loop water contains boric acid, a “liquid” neutron absorber. Because boric acid is non-volatile, boiling produces essentially boron-free steam, while the boron concentration in the remaining water increases. The clean steam leaves the core, rises into the steam generators, cools, condenses, and forms pockets of pure, deborated water. These pockets become a time bomb.

+1500 seconds (25 minutes): Accumulation of deborated water in the primary loop reaches critical levels. When this “clean” water returns to the core, it adds positive reactivity.
+3029 seconds (49 minutes): Without emergency cooling, water in the reactor continues to boil away. Finally, the critical moment arrives: the upper part of the core becomes uncovered. Deprived of cooling, the fuel rods begin heating catastrophically, by hundreds of degrees per minute. This marks the beginning of core melt.
By this time, the parallel process of boron dilution turns an already critical situation into a nearly hopeless one.
The operators’ only hope of stopping the melt is to get water into the reactor at any cost. But the primary circuit is now riddled with pockets of pure, deborated water. Starting the pumps to restore circulation would, with high probability, push this clean water into the overheated core, triggering a reactivity accident on top of the thermal-hydraulic one: a sharp power spike that could accelerate the destruction of the fuel rods.
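The urgency of this whole timeline comes from decay heat. Even with the chain reaction stopped at +69 seconds, the fuel keeps releasing a few percent of full thermal power. The classic Way-Wigner approximation gives a feel for the numbers (the reactor power and operating time below are assumed round values, not PCTran inputs):

```python
def decay_heat_fraction(t, t_operating=3.15e7):
    """Way-Wigner approximation: fraction of full power released as decay heat
    t seconds after shutdown, for a reactor operated t_operating seconds
    (default is roughly one year of operation)."""
    return 0.0622 * (t ** -0.2 - (t + t_operating) ** -0.2)

P_FULL_MW = 3000.0  # assumed thermal power of a large PWR, in MWt

# Sample the timeline: reactor trip, onset of core boiling, core uncovery.
for t in (69, 520, 2940):
    print(f"t=+{t:>4} s  decay heat ~ {decay_heat_fraction(t) * P_FULL_MW:.0f} MW")
```

Even 49 minutes after shutdown this works out to tens of megawatts of heat, enough to boil off the coolant inventory and uncover the core once injection is blocked.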
Summary of Consequences: The Four Ds
Now let’s see what remains after 49 minutes of simulation. The “Cyber Three Mile Island” scenario leaves behind four destructive outcomes:
Disaster. Although the containment structure would likely hold and prevent a direct release of radiation, the very fact of core meltdown already constitutes a Level 5 accident on the INES scale (“Accident with wider consequences”). For comparison, Chernobyl and Fukushima were Level 7.
Destruction. The core is melted, the reactor destroyed, beyond repair. Losses amount to billions of dollars, and decontamination will take decades.
Disruption. The country’s power grid loses a major generation source. This can lead to rolling blackouts and economic collapse, especially if the attack occurs in winter during peak demand.
Deception. The attack can be accompanied by compromise of radiation monitoring systems, displaying fake spikes on graphs. Combined with a real accident, this would sow panic and distrust in authorities unable to clearly explain what is happening.
Cold Analysis of Vulnerabilities
It is important to understand that this scenario is a theoretical model under ideal conditions for the attacker. In reality, the attack would still require understanding of firmware internals that the researcher never obtained. Moreover, the attacker would need to overcome not only digital but also physical security. Modern reactors include passive safety systems that activate automatically under the laws of physics (for example, high-pressure accumulators that inject water when pressure drops, even without commands), as well as hardware interlocks not controlled by software.
Nevertheless, the mere existence of such a possibility shows the need for continuous improvement of protective measures, even in an industry as heavily defended as nuclear power. The issue is not that Teleperm XS is a bad platform. The issue is that, like any complex system, it is a product of its era and its compromises.
Using a simple CRC32 to verify firmware integrity and authenticity is archaic today: CRC32 detects accidental corruption, not deliberate tampering. And the story of the “key that is not a key” is a textbook example of how a regulatory requirement, implemented formally rather than in spirit, turns into security theater.
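To make the CRC32 point concrete: CRC32 is a linear error-detecting code, not a cryptographic hash, so an attacker who can append just four bytes to a modified image can force its checksum to any desired value. The trick below is the standard published technique of reversing the CRC register; the firmware byte strings are invented for illustration:

```python
import zlib

# Standard reflected CRC-32 lookup table (polynomial 0xEDB88320).
_POLY = 0xEDB88320
_TABLE = []
for _i in range(256):
    _c = _i
    for _ in range(8):
        _c = (_c >> 1) ^ _POLY if _c & 1 else _c >> 1
    _TABLE.append(_c)

def forge_crc32(data: bytes, target: int) -> bytes:
    """Return 4 bytes that, appended to data, make zlib.crc32 equal target."""
    # Walk backwards from the desired final register to find which table
    # entry the forward pass must hit for each of the four unknown bytes.
    reg = target ^ 0xFFFFFFFF
    idxs = [0, 0, 0, 0]
    for i in range(3, -1, -1):
        idxs[i] = next(j for j in range(256) if _TABLE[j] >> 24 == reg >> 24)
        reg = ((reg ^ _TABLE[idxs[i]]) << 8) & 0xFFFFFFFF
    # Walk forwards from the register state after `data`, choosing each
    # byte so that the table index matches the one found above.
    reg = zlib.crc32(data) ^ 0xFFFFFFFF
    patch = bytearray()
    for i in range(4):
        patch.append((idxs[i] ^ reg) & 0xFF)
        reg = (reg >> 8) ^ _TABLE[(reg ^ patch[-1]) & 0xFF]
    return bytes(patch)

# Hypothetical firmware images; the contents are invented for illustration.
original = b"TELEPERM_FW original image"
tampered = b"TELEPERM_FW tampered image!!!"
patch = forge_crc32(tampered, zlib.crc32(original))
assert zlib.crc32(tampered + patch) == zlib.crc32(original)
```

Four slack bytes at the end of a firmware image are all it takes for a tampered build to pass a CRC32 check; only a cryptographic hash combined with a digital signature actually binds the checksum to an author.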
Future Lessons
Today, giant nuclear plants are giving way to Small Modular Reactors (SMRs). They are cheaper, faster to build, and can be deployed in a wide range of environments, powering anything from remote towns to data centers. Lessons drawn from studies like this one are vital to build into the design of these reactors, because the cost of error is extraordinarily high. People like Rubén Santamarta point out weak spots not to spread fear, but to make systems stronger. Our task is to listen to them carefully, because the alternative is learning about vulnerabilities not from research papers, but from news headlines.
Summary
Nuclear power plants are among the most complex and tightly controlled systems in existence, yet their very complexity creates subtle vulnerabilities. Moments of maintenance, human interaction, and network connectivity open narrow windows where even the most secure digital systems can be probed or manipulated. Understanding when technology, procedures, and human behavior intersect is essential for designing defenses that are resilient not just to accidents, but to deliberate attempts to exploit trust.
Cybersecurity in industrial control systems is also about anticipating how the system behaves in real operational conditions, how safeguards interact, and how seemingly minor oversights can cascade into major consequences. Learning from these lessons ensures that the next generation of nuclear technology prioritizes both safety and security from the very foundation.
For those looking to deepen their expertise, we offer an intensive course on the hacking and security of Industrial Control Systems (ICS), SCADA, and OT infrastructure. We dissect and analyze some of the most common SCADA/ICS protocols, discuss the process of developing SCADA/ICS zero-day exploits, examine sophisticated attacks, simulate attack scenarios, build honeypots, learn how to assess physical security, and a lot more.
Source: HackersArise
Source Link: https://hackers-arise.com/scada-ics-hacking-and-security-hacking-nuclear-power-plants-part-2/