Any maintenance activity on an infrastructure network is an optimisation problem in which the resources invested are weighed against the future structural condition of the assets.
Boundaries are set by budget, by the traffic using the structure and sometimes by the time available. In general, all built structures undergo a constant process of age-related deterioration in structural quality. The administrator or owner of the structure is responsible for its repair and maintenance, ideally within a short space of time to minimise downtime. Infrastructure maintenance should therefore be based on data on the structure’s condition collected over time, with the aim of identifying and anticipating potentially problematic ageing and deterioration.
There are two strategies for analysing the structural integrity of a complex mechanical system such as a road bridge over its lifetime. The bottom-up approach is based on a physically simplified resistance model of the structure with defined boundary conditions, which is subjected to loading described as a stochastic process.
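As a rough illustration of this bottom-up route, the sketch below estimates the failure probability of a single member by Monte Carlo sampling of a resistance against a stochastic load effect. The distributions and all parameter values are assumed purely for illustration and are not taken from any actual bridge.

```python
# Minimal sketch of a bottom-up reliability check: P(R < S) for one member,
# assuming a lognormal resistance R and a Gumbel-distributed load effect S.
# All parameters are illustrative placeholders, not real design values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 1_000_000

# Resistance: lognormal with an assumed median of 500 kNm and ~10 % scatter.
R = rng.lognormal(mean=np.log(500.0), sigma=0.10, size=n)

# Load effect: Gumbel (extreme value type I) with assumed location and scale.
S = stats.gumbel_r.rvs(loc=300.0, scale=25.0, size=n, random_state=rng)

pf = np.mean(R < S)                                   # estimated failure probability
beta = -stats.norm.ppf(pf) if pf > 0 else np.inf      # corresponding reliability index
print(f"P_f = {pf:.2e}, beta = {beta:.2f}")
```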
The data-driven model, on the other hand, collects a broad range of response data from a particular bridge and uses it to build regression models that predict its behaviour. The latter strategy is considered preferable owing to the availability of sensing techniques, better data-processing methods and cheap computing power.
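A minimal, hypothetical sketch of this data-driven route is shown below: a regression model is fitted to synthetic monitoring records that stand in for real response data, and measurements deviating from the predicted behaviour are flagged. Variable names, the temperature-to-deflection relation and the anomaly threshold are illustrative assumptions.

```python
# Data-driven sketch: fit a regression from a measured influence (temperature)
# to a measured response (mid-span deflection) and flag deviations from it.
# The synthetic data below stands in for real monitoring records.
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.uniform(-10, 35, size=500)                       # °C, assumed input
deflection = 4.0 + 0.03 * temperature + rng.normal(0, 0.05, 500)   # mm, synthetic response

# Ordinary least-squares fit: deflection ≈ a + b * temperature
A = np.column_stack([np.ones_like(temperature), temperature])
coeff, *_ = np.linalg.lstsq(A, deflection, rcond=None)

predicted = A @ coeff
residuals = deflection - predicted
threshold = 3 * residuals.std()          # simple 3-sigma anomaly band
anomalies = np.abs(residuals) > threshold
print(f"model: y = {coeff[0]:.2f} + {coeff[1]:.3f}*T, anomalies flagged: {anomalies.sum()}")
```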
It complements the bottom-up approach and provides a deeper understanding of what can appear, at first sight, to be surprising structural behaviour. In technical terms, the structure is built up as a logical system – a “fault tree” – consisting of branches connected to each other in series or in parallel. Each component represents a structural member such as a bearing or a slab. Depending on the characteristics of the structure, the failure of a single component may lead to collapse, or the structure as a whole may retain sufficient robustness to guard against this event.
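The short helper below, not taken from the source, illustrates how component failure probabilities combine in such a fault tree, assuming independent members connected in series or in parallel. The bearing and slab probabilities are invented numbers.

```python
# Combining component failure probabilities in a fault tree, assuming
# independence: series branches fail if ANY member fails, parallel branches
# only if ALL members fail.
import math

def series(p_fail):
    """System fails if any component fails: 1 - prod(1 - p_i)."""
    return 1.0 - math.prod(1.0 - p for p in p_fail)

def parallel(p_fail):
    """System fails only if all components fail: prod(p_i)."""
    return math.prod(p_fail)

# Hypothetical sub-system: two redundant bearings in parallel,
# in series with a single deck slab whose failure alone is critical.
bearings = parallel([1e-3, 1e-3])
system = series([bearings, 5e-4])
print(f"system failure probability = {system:.2e}")
```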
The data-based approach is not wholly self-explanatory. Alongside data collection, outliers must be identified and thorough quality checks carried out. The accumulated data must then be aggregated into extreme value statistics, yielding lifetime distributions for individual fault-tree members. Because multiple members interact with one another, this behaviour can be used to predict the approximate lifetime of the system.
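The sketch below illustrates this aggregation step under simple assumptions: a synthetic sensor record is cleaned, reduced to daily block maxima and fitted with a Gumbel (extreme value type I) distribution. The record, the plausibility bounds and the choice of distribution are stand-ins for whatever models and quality checks are applied in practice.

```python
# Aggregating monitoring data into extreme value statistics: clean the record,
# take daily block maxima and fit a Gumbel distribution to them.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic "sensor" record: 200 days x 1440 one-minute readings of a load effect.
readings = rng.gumbel(loc=100.0, scale=8.0, size=(200, 1440))

# Crude quality check: clip physically implausible values to assumed bounds
# (a stand-in for proper outlier identification).
readings = np.clip(readings, 0.0, 500.0)

daily_maxima = readings.max(axis=1)            # block maxima per day
loc, scale = stats.gumbel_r.fit(daily_maxima)  # extreme value type I fit

# Characteristic value with a 1 % daily exceedance probability.
x_99 = stats.gumbel_r.ppf(0.99, loc=loc, scale=scale)
print(f"Gumbel fit: loc={loc:.1f}, scale={scale:.1f}, 99 % daily value = {x_99:.1f}")
```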
Environmental factors and heavy usage can reduce a member’s capacity, increasing the probability that it fails to meet the requisite standards at the next examination. This is described by the statistical hazard rate, which typically increases over time.
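As a small worked example, the snippet below evaluates the hazard rate h(t) = f(t) / (1 - F(t)) for an assumed Weibull lifetime model; with a shape parameter above one the rate grows with time, mirroring age-related deterioration. Shape and scale are illustrative, not calibrated values.

```python
# Hazard rate of an assumed Weibull lifetime model: h(t) = f(t) / S(t).
# With shape k > 1 the hazard rate increases with time (ageing).
import numpy as np
from scipy import stats

k, lam = 2.5, 50.0                 # illustrative shape and scale (years)
t = np.array([5.0, 20.0, 40.0, 60.0])

dist = stats.weibull_min(c=k, scale=lam)
hazard = dist.pdf(t) / dist.sf(t)

for ti, hi in zip(t, hazard):
    print(f"t = {ti:4.0f} a  ->  hazard rate = {hi:.4f} per year")
```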
The owner of a bridge, structure or transport network is concerned with when, where and in what amounts investment must be deployed to guarantee continued availability of the infrastructure. Maintenance scenarios are simulated with computer models, which also produce cost estimates. Structures in public areas are regularly inspected and supervised, resulting in a condition-based rating. The major benefit of this approach is that minor interventions may be implemented early enough to restore functionality with minimal downtime.
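The following sketch shows, in highly simplified form, how such scenario simulations can compare strategies: a condition rating deteriorates stochastically, inspections are periodic, and an early minor repair is weighed against a late major one. All ratings, costs, intervals and deterioration rates are invented placeholders, purely to show the mechanics.

```python
# Toy maintenance-scenario simulation: stochastic deterioration of a condition
# rating, periodic condition-based inspections and a comparison of repair
# thresholds by mean total cost over the planning horizon.
import numpy as np

rng = np.random.default_rng(7)

def simulate(repair_threshold, minor_cost=50_000, major_cost=400_000,
             horizon=50, inspection_interval=6, runs=2_000):
    """Return the mean total cost (currency units) over the horizon in years."""
    totals = []
    for _ in range(runs):
        condition, cost = 1.0, 0.0                       # 1.0 = as new
        for year in range(1, horizon + 1):
            condition -= rng.uniform(0.01, 0.04)         # stochastic deterioration
            if year % inspection_interval == 0:          # condition-based rating
                if condition <= 0.2:
                    cost += major_cost; condition = 1.0  # late, invasive repair
                elif condition <= repair_threshold:
                    cost += minor_cost; condition = 1.0  # early minor measure
        totals.append(cost)
    return float(np.mean(totals))

for thr in (0.4, 0.6, 0.8):
    print(f"repair below rating {thr}: mean cost = {simulate(thr):,.0f}")
```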
When the bottlenecks caused by repair work, the resulting traffic jams and the associated public costs are taken into account, optimising infrastructure maintenance may lead to entirely different strategies. Our work in this field revolves around the collection, preparation and analysis of data, and around presenting complex data-driven maintenance solutions in a simplified model. It ranges from field inspection, re-analysis of structures, bridge weigh-in-motion (BWIM) and structural health monitoring (SHM) to probabilistic analysis, renewal theory and simulation tasks.
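As a hedged illustration of the renewal-theory element, the example below simulates a component that is replaced whenever it fails, with an assumed Weibull lifetime, and estimates the expected number of renewals within a planning horizon; that quantity feeds directly into long-run cost estimates. The lifetime parameters and horizon are illustrative assumptions.

```python
# Renewal-process sketch: a component is replaced on failure; count the
# expected number of renewals within an assumed planning horizon.
import numpy as np

rng = np.random.default_rng(3)
shape, scale = 2.0, 30.0          # illustrative Weibull lifetime (years)
horizon, runs = 100.0, 10_000

renewals = np.zeros(runs, dtype=int)
for i in range(runs):
    t = 0.0
    while True:
        t += scale * rng.weibull(shape)   # draw the next lifetime
        if t > horizon:
            break
        renewals[i] += 1

print(f"expected renewals in {horizon:.0f} years = {renewals.mean():.2f}")
```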
For more information, please visit http://www.petschacher.at/en/.