Three words have dominated our thinking, our policy and our practice in infrastructure decision making over the last 30 years – efficiency, sustainability, risk. This is about to change.
Efficiency
With large, centralised, and expensive mass infrastructure needed to support public services, ‘efficiency’ was a key concern. But technology is making smaller, distributed infrastructure not only cheaper but – in today’s climate of cyber-terrorism concerns – also much safer. We are now developing new, decentralised means of provision, and our new concern is how safe and effective they are.
EFFECTIVENESS is thus becoming more important than efficiency.
Sustainability
For infrastructure, ‘sustainability’ has been interpreted as ensuring long lifespans. We have designed for this, and over the last 30 years we have developed tools and management techniques that enable us to manage for asset longevity. This, too, is now changing. With the shift from an ‘asset’ to a ‘service’ focus, functionality and capability have become more important determinants of action than asset condition. This shift is fortunate, for it is needed if we are to address the many changes we are now facing – technological, environmental and demographic. At first we thought to ensure sustainability by building in greater flexibility. But change is now too rapid and too unpredictable to make mere flexibility a viable and cost-effective strategy.
ADAPTABILITY is the new word. We need to design for, be on the lookout for, and manage for, constant change.
Risk
Risk has been the bedrock tool that we have used in the past to minimise the probability of both cost blow-outs (affecting efficiency) and asset failure (affecting sustainability). Risk management has been a vital and valuable tool. However, ‘risk analysis’ implies we know the probability of future possibilities. Today we have to reckon with the truth that the ‘facts’ we used to have faith in may no longer apply.
UNCERTAINTY is more than risk and requires different tools and thinking.
Effectiveness, Adaptability and Uncertainty.
With these new words comes a requirement for new measures, new tools, new thinking, new management techniques. Life cycle cost models were valuable in helping us achieve efficiency and sustainability and apply our risk analysis. What models, what tools, what measures will help us achieve Effectiveness and Adaptability and cope with Uncertainty?
What new questions do we now need to ask?
For years we have asked ‘What infrastructure should we build?’ Perhaps now we need to ask ‘What infrastructure should we NOT build?’
In the last post
I suggested that, when it came to new ideas in IDM (Infrastructure Decision Making), the usual sources of new ideas, namely academic research, think tanks and in-house research were limited in their ability to provide workable solutions.
If academics have an incentive to continue research rather than provide answers, who has an incentive to provide answers? If think tanks have an incentive to contain their solutions to the subset of possibilities supported by their funders, who has an incentive to look more widely? And if in-house research provides depth but is limited in scalability, where can we look for answers that can apply over a wider field?
A Suggestion:
I am going to suggest that, curiously enough, it may be the often maligned consulting industry that is, today, most capable of producing the more interesting research outputs when it comes to infrastructure decision making.
There is a growing number of consulting companies that use their in-depth access to many organisations to test and develop ideas in their search for practical applications.
- They have the incentives: increased reputation and customer satisfaction lead to sustained and increased profit.
- They have access to some of the brightest of today’s asset managers, many of whom have completed, or are in the process of developing, postgraduate research theses.
- They have detailed access to information from a large variety of organisational clients.
These are the elite consulting companies. They don’t have to be large, although access to a large client base helps. Small or medium sized consulting companies may achieve the same level of innovation by developing a sharper focus.
Of course, not all consultants act in this fashion. There will always be those who choose a ‘cookie cutter’ approach and try to fit your problem into an already-conceived solution. But what you will see in the better, more innovative firms is that they apply themselves to thoroughly developing an idea; they stick with it over the time it takes to bring it to fruition (often years); and they expose their ideas to others (in conference presentations and in papers on their websites).
Nearly always the effort will revolve around one or a few talented and motivated individuals. For this reason, it pays to follow the activities of such individuals in the consulting organisations you are thinking of using. LinkedIn is a good source for this.
Other Suggestions?
Where are new ideas in infrastructure decision making to come from?
Academia?
One might suppose so. Yet how many times do you get to the end of a promising piece of research only to find nothing that can be adopted for practical use, only an indication of potential and a recommendation for further research? Frustrating, yes, but unfortunately this is an academic necessity. In our ‘publish or perish’ world, where one published paper is used to generate the research funding for the next, producing a workable solution that can be adopted by practitioners is an academic ‘dead end’, and nowhere near as academically useful as papers that generate problems for further research.
Think Tanks? Federal Inquiries?
If we cannot look to academia for usable research – that is, research ideas that can be applied in practice – where else can we look? There are, of course, think tanks and public inquiries such as the Productivity Commission. These are usually very well funded and employ some of the brightest individuals. However, both the topics and the approach chosen will of necessity be determined by the funding organisation or the incumbent government, and may not be unbiased.
Public Service and Public Policy White Papers?
Once, excellent research papers were produced by the Public Service. In the 1970s and 1980s, the ability to produce well written and researched ‘white papers’ was a highly prized skill. However, politicisation of the Service and the unfortunate elevation of the craft of ‘spin’ have taken their toll. There are still pockets of excellence in the Service but, with downsizing, there are few instances of good research written to a rigorous standard and subjected to the test of knowledgeable peers.
Asset Owners, Managers, Decision Makers themselves?
What about in-house research by asset owners? This can often produce some very well researched and well written case studies. The difficulty with adopting the ideas produced, however, is that they are heavily dependent on the organisation itself – its prior development, its general culture, leadership and organisational knowledge. Such research produces interesting case studies but presents problems of scalability.
So what are we left with? How do we progress?
Jeff Roorda continues the story he began with Motorways and Steam Engines (Mar 27)
In part 1 we introduced the difference between physical life and economic life.
Part 2 talks about certainty bias: we pretend to be certain about the future even though this is an irrational and emotional response. The feature photo shows the I-35W Mississippi River bridge collapse (officially known as Bridge 9340). The eight-lane steel truss arch bridge carried Interstate 35W across the Saint Anthony Falls of the Mississippi River in Minneapolis, Minnesota, USA; it collapsed in 2007, killing 13 people and injuring 145. The bridge had been reported structurally deficient on three occasions before the failure and ultimately failed because it was structurally deficient. The design, from the 1960s, had not anticipated the progressively higher loads, compounded by the heavy resurfacing equipment on the bridge at the time of failure. I could find no scenario that explained, clearly enough for a reasonable non-technical person to understand, the likelihood and consequence of failure and the uncertainty of the bridge’s safety. After the failure, the courts found joint liability between the original 1960s designers and the subsequent parties involved in managing the assets. The allocation of blame and acceptance of wrongdoing was highly uncertain, but lawsuits were settled in excess of US$61M. The strength and life of the bridge were uncertain, and this was reported in technical reports; yet every person who drove over the bridge, and every decision maker with the power to close it, had the illusion of certainty that the bridge was safe and would not fail.
As asset managers, our estimates of asset life have critical consequences but are uncertain. We estimate and report how long an asset will last before it fails, is renewed, upgraded or abandoned. This then determines public safety, depreciation, life cycle cost and our future allocation of resources. Whether we decide to use physical or economic life, we are still making a prediction of the future – something we should be uncomfortable about when we really think about it. None of us knows what will happen tomorrow, much less in 10 or more years. We prefer the illusion of certainty or, expressed another way, we have uncertainty avoidance. Uncertainty avoidance comes from our intolerance for uncertainty and ambiguity. Robert Burton, the former chief of neurology at the University of California at San Francisco-Mt. Zion hospital, wrote a book in 2008, “On Being Certain”, in which he explored the neuroscience behind the feeling of certainty – why we are so convinced we’re right even when we’re wrong. There is a growing body of research confirming this phenomenon. So then, what do we do about asset life? We prefer the comfortable feeling of certainty because of the discomfort of ambiguity and uncertainty.
Some years ago, for SAM, Ype Wijnia and Joost Warners, both then with the Essent Electricity Network in Holland, argued asset management was a strange business. Consider, they said:
A typical Asset manager works with an asset base that is very old. For example, at Essent Netwerk the oldest assets in operation are about 100 years old, and the average age of the assets is about 30 years. Each year about 3% of the asset base is either built or replaced. Typical maintenance cycles have a period of about 10 years. So, about 13% of the asset base is touched on a yearly basis.
This means our basic job is more like staying clear of the assets and letting them perform their function than it is like actively doing something with them, as the term Asset Management suggests. Therefore it might be wiser to call ourselves asset non-managers.
The strangeness of Asset Management increases further if you look at the portfolio of asset and network policies. From long experience with managing assets, most policies have reached a high level of sophistication and they address not only the general situation but all kinds of possible exceptions which have been encountered over the period since the policy was put in place. Those exceptions have exceptions of their own, requiring further detailing of the policy.
In the life cycle of a policy, attention therefore drifts from the original problem to managing exceptions. This means that as the sophistication of the policy grows, the knowledge about why the policy was developed in the first place diminishes.
Exaggerating a little bit, you could say that Asset Managers do not manage most of the assets, and when they do, they haven’t got a clue why they are doing what they are doing. You would expect a system that is managed this way to collapse very soon, but somehow it does not, as the electricity grid in Europe has a reliability of about 99.99%.
However, this way of managing assets can only work in a stable environment with stable or at least predictable requirements for the assets. Unfortunately, the world we live in is nothing like stable.
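The arithmetic behind Wijnia and Warners’ 13% figure is easy to check. A minimal sketch, using only the renewal rate and maintenance cycle they quote:

```python
renewal_rate = 0.03        # ~3% of the asset base built or replaced each year
maintenance_cycle = 10     # typical years between maintenance visits

# An asset maintained every 10 years means ~10% of the base is maintained yearly.
maintained_rate = 1 / maintenance_cycle

# Fraction of the asset base 'touched' in any given year.
touched = renewal_rate + maintained_rate
print(f"touched each year: {touched:.0%}")    # -> 13%
print(f"left alone:        {1 - touched:.0%}")  # -> 87%
```

The 87% left alone each year is the basis of their "asset non-managers" quip.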
Thoughts?
In our current data driven environment, there is still a role for common sense
A few years ago a council was advised, on the basis of a renewal model, that its buildings were 60% overdue for replacement. This came as a great surprise to the Council – but should it have? If true, one would have imagined that there would be many visible signs of major deterioration – non-habitable buildings boarded up for safety, signs of breakdown in buildings still in use (e.g. lifts, plumbing or HVAC not working), union demonstrations, protest movements, and so on. And if it were true, it should have been no surprise to the council: their own user experience would have told them so. So what is happening here?
“All models are wrong, but some are useful.” Statistician George Box.
Jeff Roorda, in his post asked ‘why focus on the measurement of physical life using condition instead of looking at function and capacity?’ A very sound question. Function and capacity determine economic life, or useful life (how long we can expect the asset to be of use to us) rather than how long it will physically last. His question is part of a wider range of questions about how we use models.
The first thing to note is that renewal models are financial models. They are based on averages of a group and say nothing about the time to intervene (i.e. replace) for any individual asset. When we say that an asset, or an asset component, has a useful life of 25 years, what is really to be understood is that assets of this type may fail at 15 years or even earlier and perhaps as late as 40 years or more but that when we take them as a whole, their useful lives will average out to about 25 years. This is a guide to financial planning.
Because 25 is an average (and assuming we have a normal distribution rather than one that is severely skewed), we can expect that half of these assets will fail before the age of 25 and half after, as shown here in figure 1. Thus we cannot assume that, just because an asset is more than 25 years old, it is ‘due for replacement’.
This is the mistake made by the council, and why it came as such a surprise to the Councillors.
Our instincts may not be infallible but when intuition clashes with the results of a model, it pays to check both our understanding – and our interpretation of the model.
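The council example turns on simple distribution reasoning, which a small simulation can make concrete. This is only a sketch: the 25-year average comes from the discussion above, but the normal shape and the 7-year standard deviation are assumptions chosen for illustration.

```python
import random
import statistics

random.seed(42)

# Hypothetical fleet: useful lives drawn from a normal distribution with
# mean 25 years and an assumed standard deviation of 7 years.
MEAN_LIFE, SD_LIFE = 25, 7
lives = [max(1.0, random.gauss(MEAN_LIFE, SD_LIFE)) for _ in range(10_000)]

print(f"average useful life: {statistics.mean(lives):.1f} years")

# Roughly half the assets outlive the 25-year average...
share_over_25 = sum(life > MEAN_LIFE for life in lives) / len(lives)
print(f"share lasting beyond 25 years: {share_over_25:.0%}")

# ...so an asset being older than 25 does not, by itself, make it
# 'overdue for replacement' - the 25 is a financial planning average.
```

Run it and the point falls out: about half the simulated assets serve well past the ‘useful life’, which is exactly why an age-based renewal model can declare sound buildings overdue.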
While we eagerly await Jeff’s next episode, let’s consider the following example from City West Water, which realised that ‘strategic’ and ‘operational’ didn’t have to be in opposition. This happened many years ago, but the message is just as relevant today.
When Melbourne Water was broken up into a headworks company and three distribution companies, City West found itself the owner of the central and oldest part of the network. One of the earliest problems it had to deal with was the repair/replace decision. Clearly it could not afford to replace every ageing asset that was giving it problems.
On-the-ground decisions had to be made taking into account not only the condition of the asset but also future rehabilitation programs and a range of other strategic considerations. This meant that maintenance crews found themselves assessing the condition of the asset and determining the problem, but unable to act until the information had been fed up the line and assessed by the strategic asset managers. This was costly; it delayed action and frustrated the maintenance crews.
The Strategic Asset Manager decided that in the ‘need to know’ context, the maintenance crews needed to be aware of the strategic decisions that affected their actions. So he ran a series of sessions in which he explained not only the strategic decisions that top management had come to – but why they had made these decisions.
Discussion was apparently quite lively. He answered all their questions and then went out on site with the crews. He asked them to assess the situation and recommend the action required, in the light of top management’s strategic thinking. Within a short period he found that they were making the decisions that he would have made
– and then he let them run with it.
Comment?
Today’s post is by Jeff Roorda, Technology One, and Deputy Chair of Talking Infrastructure.
Is using condition to determine life a fundamental error in asset registers?
Part 1
How long does an asset last? The answer determines life cycle cost, depreciation and infrastructure planning.
Asset registers estimate a useful life for every asset, but which life do we use? The physical life or the economic life? Yes, these are different, and often materially different. I was recently working in Queenstown, NZ, with Queenstown-Lakes District Council. After work, I boarded the steamer TSS Earnslaw, which has been carrying passengers on Lake Wakatipu since 1912; its steam engine is still powered by coal. The Earnslaw and many steam-powered transport assets around the world still have many years of remaining physical life after more than 100 years of operation.
So why did we stop using steam for transportation? Was it because the assets reached the end of their physical life?
In hindsight we all know the answer: steam power is dirty, and new technology made it obsolete. OK then – the economic life was determined by function and capacity, not by physical condition. Even though steam-powered assets had many years of physical life remaining, they became obsolete relatively quickly, far more quickly than most people managing the assets predicted. How much of the infrastructure we are building right now will be obsolete well before its physical life is reached? So why do we focus on measuring physical life using condition, instead of function and capacity? The pace of change could be faster now than in the 1940s and 1950s, when transport shifted from steam to the internal combustion engine and created the infrastructure networks we now manage.
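The distinction can be put very plainly: the life that should drive planning is whichever ends first. A minimal sketch (the figures are illustrative, not from any asset register):

```python
def planning_life(physical_life: float, economic_life: float) -> float:
    """The life that should drive depreciation and renewal planning is
    whichever ends first: physical wear-out or economic obsolescence."""
    return min(physical_life, economic_life)

# A steam asset like the Earnslaw: long physical life, but steam's economic
# life for mainstream transport ended decades earlier (figures invented).
print(planning_life(physical_life=120, economic_life=40))  # -> 40
```

Condition surveys only ever measure the first argument; it is the second that retired steam.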
Think about what we are building now and drivers for change.
Infrastructure based on continuing the past – where everyone has to travel to the office at the same time, creating peak transport loads, in internal combustion engine cars carrying one person each – does not seem a likely future, given the changes in communications and energy technologies.
Part 2 coming shortly.
Just over a year ago (Jan 6 2017), I wrote about ‘Pop Up’ Prisons, more accurately called ‘rapid build’. Rapid build is used in war zones where there is a sudden and urgent need. It is extremely expensive. Overcrowding of our prisons had led to this now being considered an urgent need, but why had we not foreseen it? One answer had, in fact, been earlier suggested by Mark Neasbey in his post “Infrastructure decisions we make when we don’t think we are making any”(Aug 12 2016) where Mark had brilliantly, and entertainingly, explained how the ‘costless’ decision to put more police on the streets to combat crime had cost consequences which were not only extensive, but, unfortunately, invisible in the eyes of decision makers.
Now, today, comes news from America, where incarceration rates have increased 500% over the last few decades. Philadelphia in Pennsylvania is even more extreme: its incarceration rate has increased 700% in the same time. But that is already changing with the appointment a few months ago of a civil rights lawyer, Larry Krasner, as the new District Attorney, who is taking extreme action to reduce the cost and human damage involved. The whole encouraging story can be found on the Slate website here, but I would like to draw your attention to one move which could effectively be used more widely.
“In a move that may have less impact on the lives of defendants, but is very on-brand for Krasner, prosecutors must now calculate the amount of money a sentence would cost before recommending it to a judge, and argue why the cost is justified. He estimates that it costs $115 a day, or $42,000 a year, to incarcerate one person. So, if a prosecutor seeks a three-year sentence, she must state, on the record, that it would cost taxpayers $126,000 and explain why she thinks this cost is justified. Krasner reminds his attorneys that the cost of one year of unnecessary incarceration “is in the range of the cost of one year’s salary for a beginning teacher, police officer, fire fighter, social worker, Assistant District Attorney, or addiction counselor.”
The same reasoning could be applied to just about any new rule or regulation introduced by government.
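The figures in the quote reduce to simple arithmetic; a quick sketch using Krasner’s own numbers:

```python
DAILY_COST = 115      # Krasner's estimate, US$ per person per day
ANNUAL_COST = 42_000  # his rounded annual figure

def sentence_cost(years: float) -> int:
    """Taxpayer cost of a sentence, using the rounded annual figure."""
    return int(years * ANNUAL_COST)

# The daily rate compounds to roughly the rounded annual figure...
print(f"daily rate over a year: ${DAILY_COST * 365:,}")   # -> $41,975
# ...and a three-year sentence must be justified against this total:
print(f"three-year sentence:    ${sentence_cost(3):,}")   # -> $126,000
```

Any rule or regulation could be costed the same way before being recommended.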
Comment?
This is a brief account of a lengthy 2014 dialogue I had with Patrick Whelan, a thoughtful architect in WA. You are now invited to join the discussion.
Patrick: In WA, a scoring model was devised and adopted for all police buildings. It scored building fabric condition and the condition of services to determine an overall condition score. The building’s suitability for purpose was then determined by comparing scores for compliance with the Building Code of Australia and for compliance with the Police Building Code (the agency’s accommodation standards). Comparing the Condition score with the Suitability score yielded a ‘works priority score’ for the station, in effect displaying a level of service offered by each station.
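Patrick’s description suggests a simple scoring pipeline. The sketch below is only one plausible reading: the equal weightings, the 0–100 scales and the ‘gap’ comparison are all assumptions, since the WA model’s actual formulae are not given here.

```python
def condition_score(fabric: float, services: float) -> float:
    """Overall condition on a 0-100 scale (equal weighting is an assumption)."""
    return (fabric + services) / 2

def suitability_score(bca_compliance: float, police_code_compliance: float) -> float:
    """Fitness for purpose from the two compliance assessments (assumed equal weights)."""
    return (bca_compliance + police_code_compliance) / 2

def works_priority(condition: float, suitability: float) -> float:
    """Higher score = more urgent works. Taking the shortfall from a perfect
    score is one plausible reading of 'comparing Condition with Suitability'."""
    return 100 - min(condition, suitability)

c = condition_score(fabric=80, services=60)
s = suitability_score(bca_compliance=90, police_code_compliance=50)
print(f"condition {c:.0f}, suitability {s:.0f}, priority {works_priority(c, s):.0f}")
```

However the weights are set, the output is a ranking for the works program, which is exactly the ‘level of service’ framing Penny questions below.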
Penny: Whilst building condition and code compliance are important, they are important from the perspective of the building maintenance manager. To get at the idea of service perhaps we need to look at what the building user wants to get out of the building. A building may be in excellent condition and meet all code conditions, yet still fail to work efficiently for the user. It may be that the design is no longer suitable for the new work that needs to be carried out, or it may have the wrong capacity – either too much or too little. It may be in the wrong place!
Patrick: In the case of the WA Police, their Building Code articulates everything needed of a building to progress policing: Planning criteria, technical criteria, functional relationship diagrams, room data sheets, formulae for calculating room sizes and the number of ablutions facilities, guidelines for the compliant design of custodial facilities, etc. The Police, being a paramilitary organisation, have a very strong handle on what is needed to do the job. However, things change over time, and the building code changes with them.
Penny: Building codes are so efficient because, in the short term, people only need to respond ‘on the dotted line’, as it were: they don’t have to think. However, can what makes for short-term efficiency lend itself to longer-term ineffectiveness? What can be done to devise a code that is both efficient and effective?