Recently, a malware attack was detected in the Middle East that targeted automated safety systems (the story was covered in Wired Magazine). These safety systems are designed to detect and respond to serious errors in industrial control systems, so that equipment in places like factories and power plants can be shut down in a controlled fashion.
Seeing this kind of attack on safety systems should remind us that attackers have few, if any, morals. In other words, they don’t fight fair.
So, the trust we place in safety systems is something we should examine closely, especially in any computer-controlled environment.
If an industrial system isn’t designed to “fail safe”, it can be very dangerous
Imagine a school crosswalk that relies on a crossing guard to get the children across a busy road safely. If the guard gets abducted, or even worse, becomes unreliable, the children may continue to try to cross the street, or may be directed to cross, when it’s not safe.
So, ideally, it would be good to have somebody who simply watches the crossing guard, and stops the children if the guard shows signs of “failure”. We also need to keep in mind that there might be scenarios where something deliberately compromises the safeguard — perhaps the guard becomes a subject of blackmail, or is injected with a psychotic drug (OK, I’m just trying to come up with relevant examples, not necessarily realistic ones).
An outage of a safeguard might be accidental, or it might be deliberate
This is a lot like an industrial control situation, where you have powerful equipment, potentially operating at high speeds. Any malicious shutdown or corruption of safety mechanisms could cause the equipment to go out of control, causing damage and injury to people working in the area.
And if the safeguards can be corrupted to the point where they can be directed to work in a deliberately unsafe manner, there’s a high risk that damage or injury will occur. In the worst case, this situation might be set up so that the actual industrial systems can be corrupted to intentionally build faults into the products they are creating, without being detected by the safeguards. This poses a possible risk to public safety.
How do you keep from going overboard on layers of security?
One of the key principles in information security is that when a system fails, it should also “Fail Secure”. This means that if a system goes down or fails, it should not leave any data or other systems vulnerable to unauthorized access. A good example is the typical firewall found in most routers used by small businesses, and probably in your home network: if the device that monitors Internet traffic for unsafe situations fails, it should stop all network traffic from passing through, rather than letting everything flow unchecked.
So, a “fail-safe” or “fail-secure” mechanism should detect if anything is wrong with the safeguards, and immediately shut down access to related systems to prevent damage or injury. But it’s easy to see that this problem could degenerate into a case of “who watches the watchers?” Ideally, we need to assess the risk of the ultimate safeguard failing, in order to decide when “enough is enough” and we can accept the risk. That’s why we tend to only see a single crossing guard, and not a secondary one watching the first one.
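The fail-secure idea above can be sketched in a few lines of code: a gate that denies by default, and only permits activity while the safety monitor has recently proven it is alive. This is a minimal illustration, not any real product's API — the names `FailSecureGate`, `record_heartbeat`, and the 5-second timeout are all assumptions made up for the example.

```python
import time

class FailSecureGate:
    """Deny-by-default gate: activity is allowed only while the safety
    monitor is confirmed healthy. Any doubt means 'closed'."""

    def __init__(self, heartbeat_timeout: float = 5.0):
        self.heartbeat_timeout = heartbeat_timeout
        self.last_heartbeat = None  # no heartbeat seen yet -> closed

    def record_heartbeat(self) -> None:
        """Called periodically by the safety monitor to prove it is alive."""
        self.last_heartbeat = time.monotonic()

    def is_open(self) -> bool:
        """Open only if a recent heartbeat exists; otherwise fail secure."""
        if self.last_heartbeat is None:
            return False
        return (time.monotonic() - self.last_heartbeat) < self.heartbeat_timeout


gate = FailSecureGate(heartbeat_timeout=5.0)
print(gate.is_open())   # no heartbeat yet: closed by default
gate.record_heartbeat()
print(gate.is_open())   # monitor just checked in: open
```

Note the design choice: if the monitor is abducted, crashes, or is silenced by an attacker, it simply stops sending heartbeats, and the gate closes on its own — the watcher does not need to actively report its own failure.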
Individuals, Employees and Managers:
Think about any computers, devices or software systems in your home or office that are acting as safeguards to your personal information. Then try to determine what would happen if that system failed or became corrupted. You may decide that you need another layer of safeguards to make sure the risks are acceptable.
In a critical business process, a risk assessment can help identify these situations, and recommend additional layers of safeguards to “watch the watchers”.
If you have systems that might be at risk, it’s a good idea to stop and think about them regularly. Please contact me to discuss the risks and what you can do to manage them properly.