R2A recently presented a free webinar, Why Hazops Fail the SFAIRP Test. It addresses one of the questions we are most frequently asked as Due Diligence Engineers.
Hazops are a commonly used risk management technique, especially in the process industries. In some ways the name has become generic: many use it to mean a safety sign-off review conducted before the design is frozen, a bit like the way the English say hoover the floor when they actually mean vacuum the floor.
Traditionally, Hazop (hazard and operability) studies are done by considering a particular element of a plant or process and testing it against a defined list of failures to see what the implications for the system as a whole might be. That is, they are bottom-up in nature and so provide a detailed technical insight into potential safety and operational issues of complex systems. They can certainly produce important results.
However, like many bottom-up techniques, they struggle to identify high-consequence common-cause and common-mode failures, precisely because the analysis works upward from individual elements rather than downward from whole-of-system scenarios.
A detailed component-by-component assessment such as a Hazop examines how each component or sub-system can fail under normal operating conditions. What it does not readily capture is how the consequences of one failure, such as a blast, fire or toxic release, can propagate and disable other parts of the system. Hazops attempt to address such 'knock-on' effects through a series of general questions after the detailed review is completed, but it nevertheless remains difficult to use a Hazop to determine credible worst-case scenarios.
This difficulty is exacerbated by the use of schematics to describe the plant or equipment functionally. Unless the analysis team has an excellent spatial and geographic understanding of the system being considered, it is very hard to see which pieces of equipment would be simultaneously affected by a blast, fire or toxic cloud.
For a limited time, you can watch the recording of the webinar Why Hazops cannot demonstrate SFAIRP here.