THE ROLE AND POTENTIAL OF MACHINE LEARNING IN DISASTER MANAGEMENT

Trellis Data
Oct 13, 2020

Context

We have a once-in-a-generation opportunity to build and deploy a disaster management capability that can change lives. This capability is built on a convergence of technologies: 5G networks, sensors, artificial intelligence, machine learning, and a proliferation of digital platforms. Together, these technologies have changed the way individuals and communities engage and cooperate, and have enabled the rise of community-centred groups better able to work with government and not-for-profit organisations.

The future could be bright. Together, these developments could transform our capability for disaster early detection, deeper understanding of cause and effect, superior prediction and optimised response.

This article provides a roadmap of machine learning capabilities central to realising this opportunity.

The role of machine learning in disaster management

Machine learning is central to disaster management. The data needed to transform disaster outcomes is already with us; we just have to understand it and act on it. At its core, machine learning enables us to learn faster from, understand, and then act on the proliferation of data we now have access to. Specifically, it is central to:

1. Realising the full potential of the IoT, network and mobile communications investment, which is currently gridlocked by data fragmentation and siloed systems;

2. Superior analysis, decision support and management of the masses of siloed data;

3. Reducing cycle times and increasing accuracy to meet the demand for quicker, more consistent and logical decision making; and

4. Shifting our focus from a reactive posture to a proactive and predictive capability.

What does good look like?

We believe there are two fundamental goals of disaster management:

1. Identifying and eliminating or reducing the impact of disasters before they occur; and

2. Managing disasters more efficiently and effectively to reduce their impact when they do occur.

These goals require capabilities across three domains. First, being better prepared. The data is already available for us to prepare for likely disasters. How certain were we that the bushfires and pandemics would happen? Very. How well were countries prepared? It varied. Second, superior response: real-time identification of localised incidents and immediate mobilisation of a tailored, appropriate response. Third, mitigation: understanding what our current data is telling us about when and where incidents will occur, and what we can do to eliminate or at least reduce their effects before they occur.

Machine learning requirements

Machine learning is only one enabler of enhanced disaster management capability, but it is a critical one. It gives us the ability not only to find the needle in the haystack, but to do something useful with it! At present, machine learning capabilities are dispersed. Because the field has been driven by silo-based technical experts, approaches have proliferated. While this is good for innovation, we are on the cusp of significant consolidation, so that the enterprise and societal value of machine learning can be realised. From a disaster management perspective, we have identified a range of core requirements that will determine the speed at which machine learning can assist in transforming disaster management preparedness, response and mitigation.

  1. Real-time explanation of decisions and alerts is mandatory. A common misunderstanding is that real-time explanation is a ‘future’ capability. It isn’t. It is available now, and it is central both to building community trust and to delivering more advanced machine learning applications such as next best action and situational awareness.
  2. Queryable, event-based sensors that run state-of-the-art machine learning models in real time in low-bandwidth environments are now available. ‘Ad hoc’ querying of the trillions of sensors will provide far greater and quicker insight and response.
  3. State-of-the-art models. At its grassroots, the machine learning field is driven by extraordinarily smart and ambitious people developing models and algorithms with significant potential. However, because the industry is still relatively siloed and formative, many of these models remain obscure or hard to surface for the common good. The next evolution is a more enterprise approach, whereby models are rapidly taken from research papers and put into enterprise-wide machine learning platforms.
  4. Sovereign capability. Machine learning, like all artificial intelligence domains, will be central to our future resilience and independence. Australia has been limited in the machine learning models it has been allowed to use, and our critical infrastructure is both exposed and beholden to other countries’ proprietary algorithms and models. If our disaster management capability is to be both resilient and dynamic, it needs to be sovereign. It needs to be understood, owned and fostered as a capability as important as other core contributors to our success: agriculture, mining and tourism.
  5. At an operational level, machine learning needs to be simpler: a capability where non-technical users can create, deploy and run machine learning models. Because of its heritage in academia, it has typically been a domain requiring significant technical understanding to code the models. The silo-based mode of delivery has also left a significant gap between the technology and the ability to deploy it. In short, great technology doesn’t make a great product. A fundamental requirement is the commissioning of enterprise-wide machine learning solutions that span the breadth of potential use cases and are productised so they are easy to use.
  6. Machine learning has evolved from its traditional detection base towards real-time and near-real-time compilation of data to generate situational awareness and next-best-action capability. The real power of machine learning will come as we use it to generate meaning and prediction beyond event detection.
  7. A raft of technical and infrastructure requirements underpins these outcomes. Rapid ingest and analysis of data in multiple formats is central, including turning terabytes of input into output within milliseconds to minutes. So is analysis across multiple sensors and data formats to detect, extract and alert on objects of interest in highly congested, and sometimes compromised, environments. Because of the vast array of technologies involved, machine learning capabilities must expose multiple APIs for interfacing with legacy systems and with other machine learning models. Finally, machine learning models need to be rapidly re-trained and re-deployed, both on device and in the cloud.
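To make requirement 1 concrete, here is a minimal sketch of an alert that carries its own real-time explanation. Everything in it is a hypothetical illustration, not Trellis Data's implementation: the sensor names, weights and threshold are invented, and the model is a simple logistic risk score whose per-feature contributions are ranked and attached to each alert so an operator can see *why* it fired.

```python
import math
from dataclasses import dataclass

@dataclass
class Alert:
    sensor_id: str
    score: float
    # (feature, contribution) pairs, largest contribution first
    explanation: list

# Hypothetical logistic risk model; in practice the weights
# would be learned offline from historical incident data.
WEIGHTS = {"temperature_c": 0.04, "wind_kmh": 0.02,
           "humidity_pct": -0.03, "days_since_rain": 0.05}
BIAS = -1.5
ALERT_THRESHOLD = 0.5

def score_reading(reading):
    """Return a risk score in (0, 1) and per-feature contributions."""
    contributions = {f: w * reading[f] for f, w in WEIGHTS.items()}
    z = BIAS + sum(contributions.values())
    return 1 / (1 + math.exp(-z)), contributions

def maybe_alert(sensor_id, reading, top_k=2):
    """Emit an Alert carrying its own explanation, or None below threshold."""
    score, contributions = score_reading(reading)
    if score < ALERT_THRESHOLD:
        return None
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return Alert(sensor_id, round(score, 3), ranked[:top_k])
```

A hot, dry, windy reading then triggers an alert whose explanation names temperature and wind as the top drivers, and that explanation can be surfaced to the community alongside the alert itself rather than reconstructed after the fact.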

The good news is that many of these requirements can already be delivered. The challenge is how we as a community pull this together. It isn’t just about government’s response. It is about how community groups, government, industry and the not for profit sector work collaboratively.

Our next article will address one of the fundamental requirements for effective collaboration: moving machine learning from a ‘black box’ to a capability that is transparent, where decisions and alerts are known and trusted. The good news is that explainable AI is already here!


Written by Trellis Data

Trellis Data delivers the world’s most sophisticated Machine Learning that can be trusted, in the world’s easiest to use Machine Learning platform.
