451 Research:
Informed Decision Making with Proactive Intelligence

By Brynne Henn - July 27, 2020

In this guest post, Matt Aslett, Research Vice President at 451 Research, shares his research on the importance of proactive intelligence. To hear more from Matt and Sisu CEO Peter Bailis on the importance of proactive intelligence for faster diagnostics, view our recent webinar on-demand.

During our recent webinar, Sisu CEO Peter Bailis and I discussed the role that proactive intelligence has to play in driving actionable insight and enabling more agile and informed decision making.

Making business decisions based on data is of course nothing new: the importance of data to enterprise planning and informed decision making has long been recognized. 451 Research’s Voice of the Enterprise surveys of data and analytics professionals illustrate that this trend is accelerating as senior executives recognize the importance of data to drive operational efficiencies, innovation, and competitive differentiation. In our latest survey, 82% of respondents told us they expect data to be more important to their organization’s decision making over the next 12 months, up slightly from 80% who said the same a year ago.

Image Source: 451 Research, Voice of the Enterprise: Data & Analytics, Data Platforms, 1H20

While data-driven decision making has been around for millennia, the most advanced enterprises today do not just make decisions based on data; they make data central to the fabric of the business and the informed decision making process. This ethos of being ‘data-driven’ is easier said than done, however. According to 451 Research survey data, the average company is currently utilizing less than half of the data it has under management for analytics projects. While that proportion is expected to rise to almost three-quarters of data under management over the next two years, there are perennial challenges to putting the theory into practice when it comes to becoming more data-driven.

These challenges can be broken down into three key areas: data silos, data volume, and static analysis.

Data silos are roadblocks to informed decision making

As we discussed in our recent webinar, one of the most common challenges to becoming data-driven is the rapid proliferation of disparate data silos. This challenge persists no matter how sophisticated an enterprise is in its data and analytics projects. Indeed, our survey results indicate that the most data-driven companies are likely to have a higher number of data silos.

These companies may have more data silos because they undertake more ambitious data management projects, and there is also an argument that their maturity makes them more likely to be managing and processing data in a distributed architecture.

As data is increasingly spread across multiple applications and stored in multiple data centers and clouds, we are seeing the emergence of more sophisticated distributed data management and processing products and services. These enable companies to analyze data where it resides, rather than relying on the traditional approach of first consolidating it into an enterprise data warehouse.

Too much data, too little time

The second challenge we see is that the most data-driven companies also have larger data volumes in use for analytics projects.

The sheer volume of this data presents both advantages and challenges. On one hand, the greater the volume of data being analyzed, and the more varied the sources of data, the greater potential there is to identify new trends and anomalies that prompt actionable insight.

On the other hand, as the volume and variety of data increase, so does the danger that business decision-makers, faced with a myriad of data sources and a variety of reports and dashboards, feel overwhelmed and default to gut feel or to data-filtering approaches that introduce bias. Machine learning has the potential to overcome this challenge by automating the initial processing of large volumes of disparate data to identify the most pertinent trends and performance indicators for decision-makers.

Mind the gap: Answering questions with changing data

As teams look to understand this data in real time, they have traditionally relied on reports and dashboards created by data analysts in response to requests from business decision-makers.

From the perspective of senior decision-makers, traditional business intelligence workflows are passive in response to changing business environments. The problem is that the terms of any business intelligence project – the data required and the desired goals – have been defined in advance. This process results in dashboards and reports that are tailored to specific business conditions and requirements, and it creates inherent delays in decision-making when it comes to responding to fast-changing requirements.
Unanticipated events require decision-makers to go looking for data, and may also necessitate requisitioning data – as well as the associated infrastructure and analysts to process it – before they are able to seek answers to any new questions they may have.

As we discussed in our webinar and the recent research brief, closing the gap between data and informed decision making is the primary role of proactive intelligence. Specifically, it is designed to complement the ongoing creation of dashboards and reports by data analysts for long-term tracking and planning. It does so through the automated processing of operational data: identifying changes as they happen and alerting decision-makers to those changes, along with their causes and implications, so that business decision-makers have the information they need to make proactive decisions as business requirements and opportunities change.

The role of proactive intelligence

Rather than requiring decision-makers to seek out new data or commission new analytics projects as conditions evolve, proactive intelligence is designed to monitor operational data in real time, diagnose changes in business metrics, alert decision-makers, and recommend appropriate actions.
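The monitor-diagnose-alert loop described above can be sketched in a few lines. The following is an illustrative Python example only; the rolling z-score rule, window size, and threshold are simplifications of the idea, not any vendor's actual method:

```python
from collections import deque
from statistics import mean, stdev

def make_monitor(window=30, threshold=3.0):
    """Return a checker that flags a metric value as anomalous when it
    deviates more than `threshold` standard deviations from a rolling
    baseline of recent values."""
    history = deque(maxlen=window)

    def check(value):
        alert = None
        if len(history) >= 10:  # need a minimal baseline first
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                alert = f"metric moved {(value - mu) / sigma:.1f} sigma from baseline {mu:.1f}"
        history.append(value)
        return alert

    return check

# Eleven stable days of a revenue metric, then a sudden drop on day 11.
check = make_monitor()
for day, revenue in enumerate([100, 102, 98, 101, 99, 103, 97, 100, 102, 99, 101, 60]):
    msg = check(revenue)
    if msg:
        print(f"day {day}: {msg}")
```

A production system would run this continuously against streaming operational data and route the alert onward rather than printing it; the structure of the loop is the same.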

As indicated above, machine learning is key to delivering proactive intelligence: enabling enterprises to continuously monitor large volumes of new and historical operational data as it is generated to identify changes in business metrics, and – perhaps more importantly – to test hypotheses on the likely root cause and implications of those changes.
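One crude way to illustrate testing hypotheses about root cause is to rank which segment of the data contributed most to a change in a metric between two periods. The function below is a hypothetical sketch (the record layout and field names are invented for the example), not a real product's algorithm:

```python
def rank_drivers(before, after, dimension):
    """Rank values of `dimension` by how much each contributed to the
    overall change in a metric between two periods. `before` and `after`
    are lists of dicts, each with the dimension key and a 'value' field."""
    def totals(records):
        out = {}
        for r in records:
            out[r[dimension]] = out.get(r[dimension], 0) + r["value"]
        return out

    b, a = totals(before), totals(after)
    deltas = {k: a.get(k, 0) - b.get(k, 0) for k in set(b) | set(a)}
    # Sort ascending so the segment with the biggest drop comes first.
    return sorted(deltas.items(), key=lambda kv: kv[1])

before = [{"region": "EU", "value": 50}, {"region": "US", "value": 50}]
after  = [{"region": "EU", "value": 20}, {"region": "US", "value": 48}]
print(rank_drivers(before, after, "region"))  # → [('EU', -30), ('US', -2)]
```

Automating this comparison across many dimensions and combinations of dimensions is what lets a system surface likely explanations, not just the headline change.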

With machine learning, enterprises can process and analyze both a higher volume of data and a wider selection of data sources than would be possible using traditional business intelligence processes. It also avoids the limitations inherent in selecting data sources and defining the scope of a query to align with predetermined requirements.

Proactive intelligence uses machine learning processes to provide decision-makers with information about not just what has happened, but why it has happened and how they should respond. This information can be presented to decision-makers in multiple formats, including automated visualizations, natural language descriptions, and recommended actions.
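As a toy illustration of the natural-language format, a diagnosed change might be rendered as plain text covering the what, why, and how. The function and its inputs here are invented for the example:

```python
def describe_change(metric, delta_pct, driver, action):
    """Render a diagnosed metric change as a one-line, plain-language
    alert with a suggested next step for a decision-maker."""
    direction = "fell" if delta_pct < 0 else "rose"
    return (f"{metric} {direction} {abs(delta_pct):.0f}%; "
            f"largest contributor: {driver}. Suggested action: {action}.")

print(describe_change("Weekly revenue", -12.4, "EU region",
                      "review recent EU pricing changes"))
```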

These techniques are also being used by purveyors of business intelligence products aimed at data analysts. One of the key differences with proactive intelligence is that it delivers information directly to decision-makers — via phone, email, or application alerts — without requiring them to break their usual workflow to go searching for insight.

While proactive intelligence therefore augments, rather than replaces, human business decision-making, machine learning has a further role to play in assessing the influence of the automated insight in the resulting decision, as well as monitoring and measuring the impact of the decision on business performance.
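A minimal sketch of that feedback loop, assuming each alert's outcome is logged as useful or not, might tune the alerting threshold over time. The precision target and step size below are arbitrary illustrative choices:

```python
def update_threshold(threshold, outcomes, target_precision=0.8, step=0.2):
    """Nudge an alert threshold based on logged outcomes (True = the alert
    led to a useful decision). A toy version of measuring the impact of
    automated insight and feeding it back into the system."""
    if not outcomes:
        return threshold  # nothing logged yet; leave the threshold alone
    precision = sum(outcomes) / len(outcomes)
    if precision < target_precision:
        return threshold + step  # too many false alarms: alert less often
    return max(step, threshold - step)  # alerts prove useful: be more sensitive
```

Run over many cycles, this kind of adjustment steers the system toward alerts that decision-makers actually act on.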

This creates a feedback loop that is designed to ensure that proactive intelligence systems are themselves continually improving to accelerate informed decision making.