De Bono coined the term ‘Po’ for a provocation – something that begs to be thought about and commented on!
In this light, my friend, Miso, has submitted the following:
“With respect to managing infrastructure, the shortcut to ‘a more enjoyable trip’ is to say that, rather than expending resources on collecting more condition data, let’s ‘recycle’ (actually, use for the first time) the mountains of information we have produced through decades of decision making.
By analyzing patterns in financial, engineering, and administrative data, we can derive a comprehensive infrastructure performance measure: one which inherently accounts for all the factors (e.g. condition and function) that the professionals took into account when they made their decisions.”
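To make the provocation a little more tangible, here is a minimal sketch of what “recycling” decision data could look like, assuming hypothetical record fields and toy figures throughout. The idea: a simple model trained to reproduce past intervention decisions implicitly weighs whatever factors the professionals weighed, and its score can serve as a crude, dimensionless performance measure.

```python
# A minimal sketch of the "recycle the decisions" idea. All column names
# and figures below are invented for illustration; real inputs would come
# from decades of financial, engineering, and administrative records.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy stand-in for historical decision records.
records = pd.DataFrame({
    "age_years":    [12, 35, 7, 48, 22, 40, 5, 30],
    "annual_cost":  [10, 55, 8, 90, 30, 70, 5, 45],  # maintenance spend (k EUR)
    "traffic_load": [3, 8, 2, 9, 5, 7, 1, 6],         # relative usage score
    "intervened":   [0, 1, 0, 1, 0, 1, 0, 1],         # the experts' past decision
})

X = records[["age_years", "annual_cost", "traffic_load"]]
model = LogisticRegression().fit(X, records["intervened"])

# The predicted probability of intervention doubles as a 0..1 "needs work"
# score; its complement is a crude, dimensionless performance measure that
# inherits whatever factors the decision makers actually weighed.
records["performance"] = 1.0 - model.predict_proba(X)[:, 1]
print(records[["age_years", "performance"]])
```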
The thought itself is quite provocative in this era of big data and condition-based maintenance. I like it! I do believe, however, in the best of both worlds.
From my experience, especially within my organisation of over 8000 personnel with an average age above 50, there is a lot of tacit knowledge. It is this knowledge which should be re-used, or rather used for the first time; the added benefit is that the tacit becomes explicit. Although this seems a bit abstract, asset management (AM) tools can be used to do exactly that, the FMECA method being one of them: implicit knowledge about failure modes that exists in “the heads of the experts” is elicited and analysed.
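As a small illustration of how FMECA-style elicitation makes the tacit explicit, here is a minimal sketch in which experts score each failure mode they carry “in their heads” for severity, occurrence and detectability, and a risk priority number (RPN = severity × occurrence × detection) ranks the modes. The failure modes and scores below are invented for illustration.

```python
# A minimal FMECA-style sketch: expert judgement, once written down as
# scores, becomes explicit, reviewable and rankable. All entries invented.
from dataclasses import dataclass

@dataclass
class FailureMode:
    asset: str
    mode: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (easily detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        # Risk priority number, a common FMEA/FMECA ranking heuristic.
        return self.severity * self.occurrence * self.detection

elicited = [
    FailureMode("sluice gate", "seal leakage", 4, 6, 3),
    FailureMode("sluice gate", "drive motor burnout", 7, 3, 5),
    FailureMode("bridge deck", "expansion joint failure", 6, 4, 4),
]

# The experts' tacit ranking, now explicit.
for fm in sorted(elicited, key=lambda f: f.rpn, reverse=True):
    print(f"{fm.asset}: {fm.mode} (RPN={fm.rpn})")
```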
On the other hand, data analysis has taken flight, helped by enablers such as the improved durability and lower cost of sensors. I do know some who state that tacit knowledge is becoming obsolete and that all we need is lots of sensors and data analysts. In my opinion, you’ll always need both. So let’s recycle our data and expand our data collection at the same time.
Thank you, Mr. Nagelhout, for the thoughts. Both are definitely required; however, I believe only one will conquer the existing challenges of public infrastructure management. After that, the primary role of public administrations will change from “a caretaker” to “an innovator” of infrastructure. Innovation is of course part of today’s business already, but the scale of it will be different.

Yes, the approach involves applying reliability theory to an environment to which it is typically not applied, at least not at full corporate scale, horizontally and vertically, for all assets owned. Once it is, the administration will be able to see, for the first time, the future performance (and expenditure) of its infrastructure as one whole (or of individual asset networks) through a unified and dimensionless performance measure (does this lead us to depreciation of built assets? I am not sure yet), a measure derived through reliability theory and backed up by (tons of) existing corporate documents. Given that complete network (condition) data collection is typically not feasible, the samples that are collected can then be used for model calibration.

Finally, I think the first adopters of the approach will see lower fiscal pressures than traditional practitioners, while providing equal or higher performance (levels of service) of their infrastructure.
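As a rough illustration of what such a unified, dimensionless measure could look like, here is a minimal sketch, assuming (purely hypothetically) that each asset’s lifetime is Weibull-distributed with parameters estimated from existing corporate records. The network-level measure is then the average survival probability, a number between 0 and 1 that is comparable across asset types, and the condition samples that are feasible to collect would be used to calibrate the parameters.

```python
# A minimal sketch of a unified, dimensionless network performance measure
# built on reliability theory. All assets, ages and Weibull parameters are
# invented stand-ins for values estimated from corporate records.
import math

# (shape k, scale lam in years) per asset.
assets = {
    "bridge A":  (2.5, 80.0),
    "culvert B": (1.8, 45.0),
    "lock C":    (3.0, 60.0),
}

def weibull_survival(t: float, k: float, lam: float) -> float:
    """Probability the asset has not failed by age t."""
    return math.exp(-((t / lam) ** k))

def network_performance(ages: dict, params: dict) -> float:
    """Dimensionless 0..1 measure: mean survival probability over assets."""
    return sum(weibull_survival(ages[a], *params[a]) for a in params) / len(params)

ages = {"bridge A": 40.0, "culvert B": 30.0, "lock C": 20.0}
print(f"network performance today: {network_performance(ages, assets):.2f}")

# The sampled condition inspections that *are* feasible to collect would
# then calibrate k and lam per asset class, e.g. by maximum likelihood.
```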