
Feedback loops in machine learning

In spite of the immense benefits that machine learning offers, the technology has been slow to take off, particularly in the enterprise world. At Impira, we believe a key reason for this is the lack of well-designed feedback loops that continuously improve machine learning models.


There is a lot of talk about machine learning, but much of it is hype rather than reality, especially in enterprises. I think a large part of the hype comes from the fact that we see and use ML-powered software every day. Google, Facebook, Netflix, Apple, and Amazon, among many others, have built successful products and features, such as product recommendations, search, and news feeds, that are turbocharged by artificial intelligence.

However, in the enterprise, our chat and email tools, internal search bars, file management tools, ERP systems, and the like are all far behind their consumer counterparts. Of the AI projects that are attempted, studies cite failure rates as high as 85% and blame a lack of high-quality data. At Impira, we've driven numerous successful AI projects for our customers, and we believe there is a different explanation for this high failure rate: the absence of feedback loops.

In consumer products, feedback loops capture how users react to or engage with the output of an ML model. For example, when you search on Google and click a specific result, you are completing a feedback loop that enables Google to measure how well its models ranked the relevant search results. In the enterprise, however, internal software projects have not yet adopted this mindset. Instead, AI/ML solutions are built the traditional "enterprise software way": by gluing together pre-trained first- and third-party software modules that are not designed to facilitate the flow of feedback.
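To make the search example concrete, here is a minimal sketch of what capturing such a feedback event might look like. The event schema, field names, and JSON sink are illustrative assumptions, not any particular company's actual format:

```python
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class ClickFeedback:
    """One completed feedback loop: a user clicked a ranked result."""
    query: str          # what the user searched for
    result_id: str      # which result they clicked
    rank: int           # the position the model assigned to that result
    model_version: str  # which model produced the ranking
    timestamp: float

def log_feedback(event: ClickFeedback, sink: list) -> None:
    """Append the event as a JSON line; a training job can consume it later."""
    sink.append(json.dumps(asdict(event)))

events = []
log_feedback(ClickFeedback(query="artists", result_id="doc-42", rank=1,
                           model_version="v3", timestamp=time.time()), events)
```

Recording the model version alongside each click is what lets you later attribute feedback to the model that actually produced the ranking.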

An Analogy

Imagine a well-trained MBA student joins your company. They've learned a lot about the practice of finance, HR, marketing, leadership, etc. in their program and even studied hundreds of case studies across many different companies alongside some of the sharpest minds of their generation. Yet, as we have all observed, their accumulated knowledge does not perfectly translate to the job. There's a lot of learning that happens when you actually try things out.

The same is true for an ML model. These models are essentially complex statistical guessing machines that build assumptions based on the data they observe. While they can be trained ahead of time with training data, this data has limited value. Training data is like a collection of case studies: it is an approximation of what might happen in the real world, and it is a useful way to educate an ML model to guess directionally well. But it's the feedback data that an ML model acquires "on the job" that allows it to refine its academic understanding of a problem to the real-world circumstances in which it's deployed. This data is often what allows a model to improve its accuracy from the often-cited 80% number to the desired 96%+.

A Classic Failed Machine Learning Project: Image Tagging

There are a lot of image tagging tools out there. The premise is appealing: just integrate these APIs into your existing software stack, and suddenly your unstructured data (images, videos, PDFs, etc.) will have rich metadata you would otherwise have to create manually. Unfortunately, there is no way to provide feedback. Worse, these tools are most often used for applications like search, where ample feedback is generated every time a user clicks on a search result, yet none of it flows back to the model.

Imagine being tasked with finding a file about "artists" on behalf of someone in your organization and retrieving information about musicians, painters, and photographers without knowing what "artist" really referred to. If you get feedback like "No, we refer to musicians as artists, but the other two categories are not relevant to us," you might go back, remove that label from the painters and photographers, and note this for the future. Unfortunately, this kind of information never makes it back through APIs in the way they are currently engineered. As a result, these tools demo very well but never actually work well enough to solve the problems they are tasked with.


While an API-based approach may not be the best way to extract relevant metadata from files, there are ways you can reformulate the tagging problem to learn from valuable feedback. Imagine that you are using the metadata to enable better search. You might train your tagging model against training data that contains several reference files and relevant tags, and then use this model to produce a set of initial tags. To complete the loop, you simply need a way of measuring how well these tags relate to certain search queries. Every time someone searches, you can track which tags match the query for any returned results, which model generated those tags, and which results the user clicked on. A click, in this case, is a small piece of feedback indicating that the result was relevant to the query, and correspondingly, that the model did a good job of selecting that tag. This feedback can be driven back into the training process for the model (offline or even online) to improve results. Depending on the use case, this feedback can even be captured for just the individual user, as a way to learn their preferences.
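The loop described above can be sketched roughly as follows. The event shapes and the simple click/skip counting scheme are assumptions for illustration, not Impira's actual implementation; a click counts as weak evidence for the tags that matched the query, and an un-clicked result as weak evidence against them:

```python
from collections import defaultdict

# Hypothetical search events: for each query, which tags matched each
# returned file, and which file the user ultimately clicked.
events = [
    {"query": "artists",
     "results": {"f1": ["musician"], "f2": ["painter"]},
     "clicked": "f1"},
    {"query": "artists",
     "results": {"f3": ["musician"], "f4": ["photographer"]},
     "clicked": "f3"},
]

def tag_feedback(events):
    """Aggregate clicks (positive) and skips (negative) per (file, tag) pair.

    The resulting counts can be converted into labels and fed back into
    the tagging model's training process, offline or online."""
    score = defaultdict(lambda: {"clicks": 0, "skips": 0})
    for e in events:
        for file_id, tags in e["results"].items():
            for tag in tags:
                key = (file_id, tag)
                if file_id == e["clicked"]:
                    score[key]["clicks"] += 1
                else:
                    score[key]["skips"] += 1
    return dict(score)

labels = tag_feedback(events)
```

In this toy data, the "musician" tag accumulates clicks while "painter" and "photographer" accumulate skips, mirroring the feedback in the "artists" example above.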

Building a Feedback Mindset

Whether you're developing new software or building on top of existing components, if machine learning is involved, it's important to think carefully about (a) where feedback can be captured and (b) how to incorporate it back into your model. For example, you can capture clicks, prompt users for additional input, and even track usage data from downstream systems. Unfortunately, most of the available ML tools, including APIs and frameworks, are still built like traditional software designed in a pre-ML era, when feedback was rarely captured or communicated.

At Impira, we start every product discussion by analyzing how to design feedback loops into our features. Impira's product allows users to correct models directly through the UI and automatically captures feedback from search to drive improvements. For our customers, this is a unique and invaluable feature that provides high-fidelity results when searching for and within images and documents.

Whether you're evaluating software, joining a company, or developing something internally, I encourage you to push for the same. ML projects without feedback loops are not likely to succeed; however, most problems can be reformulated to include one. Often, the process of designing the feedback loop and capturing feedback effectively is the hard work that separates successful ML projects from failed ones.

Try Impira’s text extraction capabilities today.