Measuring failure, and failing

And how the measure speaks to the lack of business acumen in OHS

Dave Rebbitt

Metrics, measures, and rates. Health and safety practitioners have long sought to measure an organization's “safety”. More recently, they have sought to measure its behaviors or culture.

By far the favourite measure, however, is the measure of failure. Health and safety practitioners delight in counting the number of times someone was injured, equipment was damaged, a motor vehicle incident occurred, or something was spilled.

These are all certainly events that would garner someone's attention. As a friend of mine recently pointed out, the outcome of such events often comes down to luck. They dutifully get investigated, often somewhat poorly, as I have spoken about before.


However, the very measure itself speaks to the lack of business acumen in health and safety practice. There is a lack of understanding of organizational behavior and business. It isn't for lack of trying. Arguments have raged for years over what we should be measuring, but the Total Recordable Injury Rate (TRIR) reigns supreme.

I often talk about what I call intelligent safety. That really means making sure that you have a system in place and that the system is working as intended. You might note that I don't mention measuring how many times the system fails. All systems fail because they involve fallible people. There is no such thing as zero risk, totally safe behavior, or total safety culture.

If you ask most CEOs what they report to their Board of Directors, they would tell you straight out: a report on EBITDA - Earnings Before Interest, Taxes, Depreciation, and Amortization. For the rest of us, that is profit.

You might notice that the profit would be reported monthly, quarterly, or even annually, much like the TRIR. While EBITDA speaks directly to the company's profitability and viability, it also speaks to the company's success and how well the company is operating.
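The standard EBITDA add-back can be sketched in a few lines. This is a minimal illustration of the definition above; all figures are invented for the example, not taken from any real company.

```python
def ebitda(net_income, interest, taxes, depreciation, amortization):
    """Earnings Before Interest, Taxes, Depreciation, and Amortization:
    profit with those four deductions added back."""
    return net_income + interest + taxes + depreciation + amortization

# Hypothetical figures, in $ millions: $103M net profit,
# $12M interest, $30M taxes, $5M depreciation, $3M amortization.
print(ebitda(103, 12, 30, 5, 3))  # prints 153
```

The point of the measure is the single headline number: one figure, reported on a fixed cadence, that summarizes how the whole operation is performing.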

You might get a simple answer if you asked a CEO why they're not reporting failures. Let's say the company lost money on 200 days last year. The CEO would not go before the board of directors and say, "We lost money on 200 of the 365 days last year, but our net profit was $103 million." The CEO would not outline how those 200 days of failure were due to market forces or otherwise beyond their control. That would certainly not happen if the company had lost money on only 15 days of the previous year. CEOs focus on the big picture; individual failures simply don't merit their attention. The question becomes: why do health and safety practitioners fixate on individual failures?

Understanding the failures is a good idea if you have a system that appears to be running well - that is, if you decide to learn from the failures in order to make improvements. That would, of course, require a robust learning culture with skilled investigators.

In some cases, CEOs have to report bad news, such as losses, to shareholders or the board, and they are often under incredible pressure to explain why the losses occurred and why they will not recur. That means understanding the failures. Most CEOs would actively avoid such a situation.

Yet, the safety system in any company is focused almost entirely on failures. Great effort is taken to lay out procedures and processes and create a binder that weighs enough to seem serious.

Some espouse interesting-sounding terms like Safety II or Human and Organizational Performance. They say that organizations should expect failure, and they talk about the new way to do safety. Except it isn't the new way to do safety; there is no one right way to do safety. The key is that safety is about people.

People like to focus on the negative. You only have to watch the news to see that. However, there's a big difference between watching a news story about someone else and hearing about something highly undesirable that happened in your own company.

And yet we have an entire safety department focused on delivering bad news to senior management about the company's performance. We assume that the TRIR somehow measures the company's performance when it has been well established that the number is not even statistically significant.

Some have suggested that we measure the number of things we do and the number of times things go well, concentrating on what is being done right and what is working. Perhaps safety practitioners should try to understand why things are working well and why incidents are not occurring, instead of expending so much energy on explaining why incidents have occurred or trying to make them seem less severe than they actually are.

Incidents are a great learning opportunity because they mark a failure in the system; they have always been symptoms of deeper systemic failures. It is important to get to the bottom of those failures, which is not a common occurrence.

The preoccupation with failure leads us to statements like "2.3 out of every 100 employees experienced a recordable injury last year." That does not sound so bad, but it certainly doesn't sound as good as "97.7% of employees did not experience a recordable injury last year." That framing might also be a lot easier to explain than a handful of anomalous events that led to failures.
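The arithmetic behind that framing is simple. A sketch, using the standard OSHA-style incidence rate formula with its 200,000-hour base (100 full-time workers at 2,000 hours per year); the injury and hours figures are illustrative only.

```python
def trir(recordable_injuries: int, hours_worked: float) -> float:
    """Total Recordable Injury Rate per 100 full-time workers,
    using the conventional 200,000-hour base."""
    return recordable_injuries * 200_000 / hours_worked

# Illustrative: 23 recordable injuries across ~1,000 full-time
# workers, i.e. 2,000,000 hours worked.
rate = trir(23, 2_000_000)
print(f"TRIR: {rate:.1f} per 100 workers")   # 2.3
print(f"Uninjured: {100 - rate:.1f}%")       # 97.7 - the positive framing
```

Both numbers describe the same year; only the framing changes.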

Does the health and safety program have successes? Are they worth discussing? I would suggest that they are. Your number one leading metric is how well the system is doing its job!

For example, are people receiving training? Is that training effective? Does anyone even check?

I could go on, but I think you get my drift. Intelligent safety is about putting a system in place and ensuring that system is working to protect employees. Identifying and mitigating the risk to people is not easy.

Measuring the number of failures isn't helping anyone, particularly when we are failing to measure even those failures appropriately.