Life After Hackathons

Until recently I hadn't appreciated the flurry of hackathons that have taken place over the past few weeks, many with the admirable aim of identifying and improving measures to combat the effects of COVID-19, or of better enabling organisations (public and private) to cope with its fallout.

How can we understand more about the spread of infectious disease?

Notable public examples abound. At the same time, some commercial organisations are doing much the same thing to address immediate challenges, such as:

  • Forecasting demand for product lines over the coming months, reflecting the impact of infection rates and self-isolation.
  • Modelling the impact of workforce illness on service provision.
  • Prioritising limited call-centre capacity.
  • Optimising scarce resources.

A great idea is a good start

While I confess that (as an architect) hackathons aren't for me, this burst of activity got me thinking about what these organisations will do once the events have identified new approaches to solving particular problems.

The gentle point I’m making here is that having a great idea is a good start. But the next challenge is how to implement it in the real world. As Nelson Mandela said, “After climbing a great hill, one only finds that there are many more hills to climb." This is especially true in the deployment of analytics, where our industry is littered with evidence of organisations failing to effectively deploy analytics into the business context.

Of course, you can perform some proportion of analytics in a batch or offline manner, and its results simply provide additional insights you can use to adjust or improve downstream activities. For example, you could prescore individuals, applications or activities for later use. However, my focus here is the remainder: analytics performed closer to the eventual business process. For example, an organisation may need a real-time decision to operate a manufacturing line or logistics activity.
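
To make the distinction concrete, here is a minimal sketch in Python of the two modes. The model is assumed to be a scikit-learn-style classifier, and the feature names and decision threshold are hypothetical; the point is simply the contrast between prescoring a whole table offline and answering a single event inside a live process.

```python
import pandas as pd

def batch_prescore(model, customers: pd.DataFrame) -> pd.DataFrame:
    """Offline mode: score the whole table overnight and persist the
    results for downstream processes to pick up later."""
    scored = customers.copy()
    scored["risk_score"] = model.predict_proba(
        scored[["age", "balance", "tenure_months"]]  # hypothetical features
    )[:, 1]
    return scored  # in practice, written back to a database table

def realtime_decision(model, event: dict) -> str:
    """Online mode: a single event arrives from the business process and
    must be answered within that process's latency budget."""
    features = [[event["age"], event["balance"], event["tenure_months"]]]
    score = model.predict_proba(features)[0, 1]
    return "refer" if score > 0.8 else "proceed"  # hypothetical threshold
```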

Some recommendations

1. Involve SMEs

As soon as possible, secure the active involvement of subject-matter experts (SMEs) who know the process you are seeking to improve, and get them to validate the proposed approach. They're typically an excellent source of input because they understand the limitations of the existing landscape.

2. Expect challenging production data

Don’t underestimate the difficulty of transitioning from synthetic or sample data to a regime where you are using production data that may change daily or be updated every few seconds. Beyond the complexities of timeliness, data is often incomplete or arrives late. It can also be full of errors: missing values, out-of-range values, or values whose meaning varies depending on the source.
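
As an illustration of the kind of defensive handling this implies, here is a minimal Python sketch. The field names, valid ranges, defaults and source-specific unit conversions are all hypothetical; in practice these rules come from your SMEs and from profiling the production feeds.

```python
import math

# Hypothetical per-field rules: a valid range plus a default for missing values.
FIELD_RULES = {
    "age":        {"min": 0, "max": 120, "default": None},
    "units_sold": {"min": 0, "max": 1e6, "default": 0},
}

# Example of a value whose meaning varies by source: one feed supplies
# weight in kilograms, another in pounds.
WEIGHT_TO_KG = {"warehouse_feed": 1.0, "legacy_feed": 0.4536}

def clean_record(record: dict, source: str) -> dict:
    """Impute missing values and neutralise out-of-range ones before scoring."""
    cleaned = {}
    for field, rules in FIELD_RULES.items():
        value = record.get(field)
        if value is None or (isinstance(value, float) and math.isnan(value)):
            value = rules["default"]  # impute, or leave None to reject the record
        elif not rules["min"] <= value <= rules["max"]:
            value = None              # out of range: treat as missing
        cleaned[field] = value
    if "weight" in record:
        cleaned["weight_kg"] = record["weight"] * WEIGHT_TO_KG[source]
    return cleaned
```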

A greater challenge is where the data used to build the original analytics isn't available in the operational environment. This demonstrates the value of involving production support personnel during the build phase, to ensure that you can eventually deploy what you develop.
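
A cheap safeguard, sketched below under the assumption that you can enumerate both the model's training features and the columns the operational feed can supply (the names here are hypothetical), is to compare the two sets during the build phase rather than at go-live.

```python
# Features the model was trained on, versus what production can supply.
TRAINING_FEATURES = {"age", "balance", "tenure_months", "marketing_segment"}

def deployment_gap(operational_columns: set) -> set:
    """Return the training features the production feed cannot provide.
    A non-empty result means the model must be rebuilt, or the feed
    extended, before the analytics can go live."""
    return TRAINING_FEATURES - operational_columns

missing = deployment_gap({"age", "balance", "tenure_months"})
if missing:
    print(f"Cannot deploy as-is; unavailable in production: {missing}")
```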

3. Anticipate impact on decisions

Anticipate the impact that the analytics you plan to deploy will have on the existing decision-making process. Imagine you have an existing process that requires a real-time response: incorporating additional analytics that takes five minutes to run is clearly impractical. At an early stage, you need to determine the best place to deploy the analytics.
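
One early, low-cost test is simply to time the scoring call against the latency budget of the host process. A minimal sketch, assuming a hypothetical 200-millisecond real-time budget:

```python
import time

LATENCY_BUDGET_SECONDS = 0.2  # hypothetical SLA of the host process

def within_budget(score_fn, event: dict) -> bool:
    """Time one scoring call against the latency budget. Repeated over
    representative traffic, this shows early whether the analytics can sit
    inline or must move to an asynchronous or prescored path."""
    start = time.perf_counter()
    score_fn(event)
    return (time.perf_counter() - start) <= LATENCY_BUDGET_SECONDS
```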

4. Set expectations

Do not expect the data scientists who authored the analytics to run it in production; recognise that different teams look after production processes. By their very nature, production support folk are a cautious group, and they will be suspicious if your analytic solution involves new products or requires additional libraries.

Products need to be tested to ensure they don't interfere with existing software, while libraries (particularly from the open-source domain) may be seen as untested or a potential source of malware or worse. It may be that your new analytics is first deployed into a development environment before it can be promoted to production.
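
One way to ease that conversation is to make the dependency footprint explicit and verifiable. The sketch below assumes a hypothetical allow-list of packages and pinned versions agreed with production support, and checks the running environment against it using only the standard library.

```python
from importlib.metadata import PackageNotFoundError, version

# Hypothetical allow-list agreed with production support: package -> pinned version.
APPROVED = {"numpy": "1.24.4", "pandas": "2.0.3", "scikit-learn": "1.3.0"}

def audit_environment() -> list:
    """Report packages that are missing or drift from the vetted versions,
    so surprises surface in development rather than during promotion."""
    problems = []
    for package, pinned in APPROVED.items():
        try:
            installed = version(package)
        except PackageNotFoundError:
            problems.append(f"{package}: not installed")
            continue
        if installed != pinned:
            problems.append(f"{package}: {installed} != approved {pinned}")
    return problems
```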

By this point you may be wondering whether the hill that represents the move to operationalisation is actually a mountain compared with the achievement of the original hackathon. Exciting as hackathons are, the reality is that the transition from data science lab to production is significant, though not insurmountable. Just consider that in 2019, mountaineers complained of “traffic jams” on the approach to the summit of Everest.

About Author

Paul Gittins

Business Solutions Manager

A seasoned architect with hybrid business and technical skills, Paul encourages clients to adopt new SAS technologies embracing Hadoop, in-memory processing, high-end RDBMS and Cloud. His goal is to persuade client enterprise architects, technologists and data scientists to extend their use of SAS as an enterprise technology, leveraging partners including Teradata, Cloudera, AWS, Google & Microsoft. He has conducted architecture & best practice reviews and consulting engagements in more than 40 countries and has led multi-disciplinary teams to success in large and complex environments.
