One of the highest profile organisational crises in recent times was last week’s disastrous failure of the Australian online census. The Bureau of Statistics website failed on census night, leaving millions of angry and frustrated citizens unable to submit their census information for almost 40 hours.
There are plenty of theories about what happened and who was responsible, but it’s still far too early to make a definitive assessment of exactly what went wrong. We should leave that to the slew of different inquiries and investigations which were rapidly announced.
However, there are already some important broad lessons for crisis managers everywhere about preparedness and prevention. An angry Prime Minister Malcolm Turnbull stated the blindingly obvious when he thundered: “Denial of service attacks are absolutely predictable.”
Of course they are, Mr Turnbull. In fact there could have been no more predictable crisis risk for the online census than a system meltdown. Yet the headlines are full of organisations that ignore the predictable and pay the price. What becomes obvious is that they often had no real plan for how to respond once those predictable crises struck.
Crisis management falls into two distinct categories of action – resistance and resilience. Resistance is the effort you make to try to prevent crises happening in the first place. Resilience is the steps you take to minimise the damage from a crisis, and to protect reputation. Organisations should be committing resources to both.
For the Bureau of Statistics, the minute the system failed, nothing they did or said could have prevented it becoming a major national crisis. The only uncertainty at that stage was how damaging the crisis would be and how long it would last. From the evidence so far, it would seem that the planning and load testing were all about technical issues, and not enough about how to explain a failure and minimise the fallout.
Effective crisis management demands a full communication contingency plan for the most likely and the worst case crises. A system failure is never “just an IT problem” and the Bureau’s apparent focus on system integrity for the census evidently left them dangerously vulnerable and unprepared when it came to communicating the nightmare scenario. The crucial question here is not what could go wrong and how can we prevent it, but what is our communication plan when it does go wrong?
Moreover, nightmare IT scenarios are certainly nothing new in major government agencies. Think no further than last December, when China was blamed for a massive hack attack on the Bureau of Meteorology, which houses one of Australia’s largest supercomputers. The predictable crisis risk was well captured by a headline in The Australian: “The hacking of the Bureau of Meteorology shows the vulnerability of all agencies.”
Yes, all agencies. Which brings us back to last week’s census debacle, where the political game began with the Opposition calling for ministerial resignations, and the Prime Minister warning that heads would roll. They were setting the scene for one of the brutal realities of post-crisis management. Investigations and commissions of inquiry are seldom primarily about finding out what happened. Their real purpose is to apportion blame, and history shows that “poor communication” is a popular scapegoat.
The Bureau of Statistics didn’t just need load testing for the website. It needed load testing for the crisis communication contingency plan.