Last December, the lights went out in the Ivano-Frankivsk region of Ukraine, and approximately 225,000 customers were left without electricity for several hours. This blackout was caused by advanced attackers, presumably from Russia, who managed to disconnect around 30 substations from the power grid.
E-ISAC and SANS ICS published a detailed analysis of this attack. The analysis shows significant flaws in the Ukrainian grid's security – among them, the lack of any network scanning systems that could have detected the attackers as they prowled the power grid's networks.
This operation succeeded largely because of the nature of power companies themselves; it could have been detected and blocked. Unfortunately, it wasn't.
A smart, multi-stage attack
According to the report, the attackers gained network access and then studied the target's routines and weak spots for about six months. They combined several techniques to reach their objective and sabotage power distribution. The first stage was network intrusion: the attackers sent phishing emails to three power companies. The emails carried Microsoft Office files tainted with the BlackEnergy3 malware, which activated once the recipient enabled macros to view the document. The attackers used the malware to steal credentials and elevate privileges in order to perform network reconnaissance.
The second stage was ICS entry. The attackers used VPN connections to enter the ICS network and locate the power nodes relevant to their operation, then used the companies' existing remote access tools to control the grid's machinery through its HMI systems.
The third stage was weapon insertion. The attackers uploaded a malicious SCADA firmware update that corrupted the ICS network's serial-to-Ethernet converters, cutting the remote link between the control center and the substations.
The final stage was taking down the subsystems. The attackers issued a "scheduled" power outage that stopped power distribution. At the same time, they launched a modified KillDisk tool to erase the targeted networks' master boot records and logs. To prolong response times, these actions were synchronized with a denial-of-service attack that flooded the power grid's call center.
How could this attack have been stopped?
From a bird's eye view, this entire cyber operation seems legitimate to the power grid's automated systems: elevation of local credentials, movement of information between network elements, and receipt and installation of a firmware update through the regular channels. But when inspected separately, the network activity created by the attackers is highly suspicious. Why would existing users need higher access authorizations? Who asked for a firmware update, and what exactly does this update do? Why is there new VPN activity? How many users should there be inside the grid's ICS network, when are they supposed to be there, and why is one of them moving erratically between nodes?

These questions are usually asked once suspicion of a network intrusion has already been raised. But in the Ukraine power grid hack there were no clues of such an intrusion – because there was no defensive system in place to produce those clues.
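The questions above are exactly what a simple behavioral baseline can ask automatically. A toy sketch (the field names and data shapes are hypothetical) that learns which hours each user normally opens remote sessions, then flags unknown users or sessions at unusual hours:

```python
from collections import defaultdict

def build_baseline(history):
    """Learn which hours each user normally opens remote sessions.

    `history` is an iterable of (user, hour) pairs from a training window.
    """
    baseline = defaultdict(set)
    for user, hour in history:
        baseline[user].add(hour)
    return baseline

def flag_anomalies(baseline, sessions):
    """Flag sessions from unknown users, or at hours a user never worked."""
    alerts = []
    for user, hour in sessions:
        if user not in baseline:
            alerts.append((user, hour, "unknown user"))
        elif hour not in baseline[user]:
            alerts.append((user, hour, "unusual hour"))
    return alerts
```

A real deployment would baseline far more than login hours, but even this crude model would have asked "why is there new VPN activity?" on its own.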
The initial interaction stage consisted of malware that relied on macro execution to launch itself, with no exploit code at all. By prohibiting macro execution, the network's defenders could have blocked the attackers' chosen vector. This action alone couldn't have stopped the attack as a whole, as the attackers could have used a different approach vector (for example, malware-infected social media messages).
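Macro-bearing documents are also easy to spot at the mail gateway: modern Office files are ZIP archives, and a VBA macro project is stored inside them as a part named `vbaProject.bin`. A minimal sketch of such a filter (the conservative handling of legacy formats is a policy assumption, not part of the report):

```python
import zipfile

def contains_vba_macros(path: str) -> bool:
    """Return True if an Office file appears to embed a VBA macro project.

    Modern Office files (.docx/.docm/.xlsm, ...) are ZIP archives; a VBA
    project is stored as a part named 'vbaProject.bin'. Legacy binary
    formats (.doc/.xls) are not ZIPs, so they are flagged conservatively.
    """
    if not zipfile.is_zipfile(path):
        return True  # legacy or unknown format: treat as suspicious
    with zipfile.ZipFile(path) as doc:
        return any(name.endswith("vbaProject.bin") for name in doc.namelist())
```

A gateway using this check could quarantine macro-bearing attachments before they ever reach an operator's inbox.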
After initial network access was gained, the attackers' reconnaissance could have been detected by network monitoring solutions that look for suspicious behavior. To find their ICS entry point, the attackers had to prowl the network for a long time, gathering and exfiltrating the data required for the next stage of their attack. These actions would have stood out to network scanners, and would at least have raised suspicion of an impending attack.
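One reason that kind of prowling stands out is fan-out: a host sweeping the network touches far more distinct peers than a host doing its routine job. A sketch of that heuristic over flow logs (the threshold and data shape are illustrative assumptions):

```python
from collections import defaultdict

def detect_fanout(flows, max_peers=20):
    """Flag hosts that contact an unusually large number of internal peers.

    `flows` is an iterable of (src, dst) pairs taken from flow logs.
    A host performing reconnaissance sweeps many distinct destinations,
    while a host doing its routine job talks to only a few.
    """
    peers = defaultdict(set)
    for src, dst in flows:
        peers[src].add(dst)
    return {src for src, dsts in peers.items() if len(dsts) > max_peers}
```

On a fixed-process network like a power grid's, the normal peer count per host is small and stable, which makes even this simple rule unusually effective.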
Once the attackers had ICS access, the malicious firmware update should have been a red flag for the network's defenders. While the communication between the ICS controllers and the HMI would have appeared legitimate, the firmware itself was suspicious. The machinery gets firmware updates from time to time, but since a power grid's production network is very sensitive to system changes, the relevant IT teams should have been on high alert, watching for malfunctions, in the days and even weeks surrounding such an update.
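A basic integrity gate illustrates the point: before any image reaches a controller, compare its digest against an operator-maintained allowlist of vendor releases. This is a sketch under that assumption; a real deployment would verify vendor signatures rather than bare hashes:

```python
import hashlib

def firmware_is_approved(image: bytes, approved_digests) -> bool:
    """Accept a firmware image only if its SHA-256 digest appears on an
    operator-maintained allowlist of known vendor-released builds."""
    return hashlib.sha256(image).hexdigest() in approved_digests
```

Any image the vendor never shipped – such as the attackers' serial-to-Ethernet firmware – would fail this check and trigger an investigation instead of an installation.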
Why did this attack succeed?
A power grid's network shouldn't be too hard to protect; its simplicity creates far less traffic than that found on networks belonging to retail, finance, hi-tech or other companies. The processes involved in power distribution are fixed, and change only when major substation malfunctions occur. Although these networks are not hard to protect, their operators chose to forgo protection in this case.
Perhaps the simplest explanation as to why the Ukrainian attack succeeded can be found by looking at power grid operators' tendency to value sustainability over all other network characteristics – including security. Many power companies use a patchwork of systems in their networks that, while stable, can malfunction when new elements are introduced. Therefore, many power companies prefer to avoid any network changes, fearing that technical issues will disrupt power distribution. They prefer taking their chances with cyber attacks over suffering the wrath of paying customers wondering why there is no electricity. Let's hope that power companies learn from the Ukrainian case, and upgrade their defenses in order to avoid future attacks.