All models are wrong, but some are useful
When Models Fail: What Process Safety Can Learn from LLM Hallucination
The title of this post is taken from a 1976 paper by the statistician George Box.
The post discusses the benefits of studying incident reports, such as those published by the Process Safety Beacon, and the features of Large Language Models (LLMs) that matter when learning from such reports.
Issues covered include:
- Model does not fit the data
- Unidentified ‘Raw Signals’
- Smoothing safety data
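To make the last item concrete, here is a minimal, purely illustrative sketch (not from the post) of how smoothing a hypothetical incident-count series with a moving average can attenuate a rare but important spike, the kind of 'raw signal' the post refers to. The data and function are assumptions for demonstration only.

```python
def moving_average(series, window):
    """Trailing moving average over a list of counts (illustrative sketch)."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        chunk = series[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical monthly incident counts containing one sharp spike.
counts = [1, 0, 1, 0, 9, 0, 1, 0]
smoothed = moving_average(counts, window=4)
print(smoothed)  # the spike of 9 is flattened to 2.5
```

The smoothed series never exceeds 2.5, so the single-month spike of 9 incidents is easy to overlook, which is one way a model's summary statistics can fail to fit the underlying data.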

