Bowtie Diagrams are Good, Actually
When it comes to safety, security, or public health measures, there’s usually an unspoken debate: do you trust individuals to do the right thing, or not?
You can certainly argue, and back up with data, that teaching airgaps is like teaching abstinence: ineffective, because it doesn’t actually happen.
For example, as a “not” approach, take Dale Peterson’s discussion of seatbelts vs. airbags:
Not only does it make sense to eliminate the possibility of an error; the approach is also (speaking very broadly here) strongly backed up by data.
Security should work the same way: at the top of the hierarchy, just don’t have the risk.
Enter the Safety Case on a Page, and bowtie diagrams. A bowtie diagram is a graph (in the mathematical sense) focused on a hazard; this is very like Bochman’s CCE approach. Start with the undesirable event you think could happen. Above the event, you put the hazard; in the case of cybersecurity in ICS, that’s “computer control of $VARIABLE”. On the left, you have threats, i.e. things that could cause the event, such as a threat actor or an accident causing a PLC malfunction. (Honestly, I wouldn’t be surprised if a real event arises as a mixture of both.) On the right, you have responses and residual risk.
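The structure above can be sketched as a small data structure. This is just an illustration of the shape of a bowtie, not any standard tooling; the class and field names are mine, and the ICS example values come from the text.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a bowtie diagram as a plain data structure.
# The "knot" of the bowtie is the top event; the hazard sits above it,
# threats fan out on the left, responses/residual risk on the right.
@dataclass
class Bowtie:
    hazard: str                  # e.g. "computer control of $VARIABLE"
    top_event: str               # the undesirable event at the center
    threats: list = field(default_factory=list)    # left side: causes
    responses: list = field(default_factory=list)  # right side: mitigations, residual risk

# Example from the text: cybersecurity in ICS.
ics = Bowtie(
    hazard="computer control of $VARIABLE",
    top_event="loss of control of the process",  # my placeholder wording
)
ics.threats.append("threat actor causing PLC malfunction")
ics.threats.append("accident causing PLC malfunction")
ics.responses.append("manual override (residual risk remains)")
```

A real event might traverse several threat paths at once, which is why each side is a list rather than a single edge.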
And, put bluntly, just as with safety, ask yourself whether you can simply do without the source of risk, or at least reduce it (I’d put firewalls here). Then you’d look at Engineering Controls (IDS/IPS), Administrative Controls (training and passwords), and PPE (which probably doesn’t apply).
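That ordering is the hierarchy of controls, walked top to bottom. A minimal sketch, assuming the security examples map the way the text suggests (the list structure and level names are mine; PPE is left empty because it probably doesn’t apply):

```python
# Hierarchy of controls, most effective first, with the security
# measures the text assigns to each level.
hierarchy_of_controls = [
    ("Eliminate/reduce the risk source", ["firewalls"]),
    ("Engineering Controls", ["IDS/IPS"]),
    ("Administrative Controls", ["training", "passwords"]),
    ("PPE", []),  # probably doesn't apply to cybersecurity
]

# Walking the list in order mirrors the recommended review order:
for level, examples in hierarchy_of_controls:
    print(level, examples)
```

The point of keeping it ordered is that you only descend to a weaker control class after asking whether a stronger one can remove the risk outright.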