
Can AI solve your business problem?

Artificial Intelligence promises to solve a whole host of complex business problems, but how can you know for sure that a given problem is ripe for AI?


Artificial Intelligence (AI) can be confusing. AI is a broad term that covers a wide range of applications, and while we see AI technologies perform incredible feats such as Google Translate, AlphaGo, and solving a Rubik's Cube, it can be hard to tell which business problems AI is apt to solve. Matching a given problem to an AI solution often comes down to framing: sometimes what appears to be an intractable problem can, with minor tweaks, become the kind of problem AI excels at solving.

At Impira, we’ve been fortunate to work through these challenges with customers across a variety of industries, departments, and use cases. As part of that process, we’ve developed an internal framework to help customers evaluate how AI can be used to automate their manual workflows around unstructured data (i.e., images, videos, and scanned documents). Here, we’ll share how we think about that framework and some examples of problems for which AI can make a meaningful difference.

When is AI a useful technique?

We use the following questions to help evaluate if a problem can benefit from AI:

1. Is there one correct answer for each task to be solved? 

2. Could a human perform the task with sufficient quality?

3. Is there a natural feedback loop and the opportunity for a human to verify the answer when needed?

Is there one correct answer for each task to be solved? 

It may go without saying, but AI is, unfortunately, not magic. Most AI applications perform what is called supervised learning. This means that the AI models learn to predict an answer, such as whether or not a certain jacket is present in a photo, given input data provided by humans. The models learn precisely from this type of data, that is, data that has been correctly labeled by humans. If you cannot frame your problem as one in which you are predicting a value with one correct answer from your inputs, then that problem may not be a great fit for AI. This may seem obvious, but some problems, such as looking for anomalies in your data, are inherently subjective and thus difficult to solve using AI techniques.
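To make the idea of supervised learning concrete, here is a minimal, illustrative sketch: a 1-nearest-neighbor classifier that predicts a single correct label ("jacket" / "no jacket") from human-labeled examples. The feature vectors and labels are invented stand-ins for whatever representation a real system would extract from an image.

```python
def predict(example, labeled_data):
    """Return the label of the closest human-labeled example (1-NN)."""
    def distance(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(labeled_data, key=lambda pair: distance(pair[0], example))
    return nearest[1]

# Human-provided "correct answers": (features, label) pairs.
training = [
    ((0.9, 0.1), "jacket"),
    ((0.8, 0.2), "jacket"),
    ((0.1, 0.9), "no jacket"),
    ((0.2, 0.8), "no jacket"),
]

print(predict((0.85, 0.15), training))  # → jacket
```

The point is not the algorithm (real models are far more sophisticated) but the framing: every training example has exactly one correct, human-supplied answer for the model to learn from.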

Another way of thinking about the same question is whether or not you could quantitatively measure success. AI models learn by evaluating predictions against the correct answers. For models to correctly learn, there must be some quantitative metric against which they can improve. You can easily evaluate whether or not a model has correctly predicted the presence of a certain jacket in a set of images, and from there measure an average accuracy. If you cannot evaluate your model with a simple metric like accuracy, that is likely a sign that the problem must be framed differently to solve it using AI techniques.
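The metric described above can be sketched in a few lines. Accuracy is simply the fraction of predictions that exactly match the human-verified labels; the predictions and labels below are illustrative.

```python
def accuracy(predictions, correct_answers):
    """Fraction of predictions that exactly match the labeled answer."""
    matches = sum(p == c for p, c in zip(predictions, correct_answers))
    return matches / len(correct_answers)

preds  = ["jacket", "no jacket", "jacket",    "jacket"]
labels = ["jacket", "no jacket", "no jacket", "jacket"]
print(accuracy(preds, labels))  # → 0.75
```

If your problem has no metric this simple to compute, that is a useful warning sign that it needs reframing before AI can be applied.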

Could a human do the task?

One good rule of thumb is that if a moderately well-trained human cannot perform the task to the accuracy needed, neither can AI. There are two major reasons for this.

Firstly, as mentioned above, AI techniques learn from correct answers. These correct answers are usually provided by humans. For example, the massive advances in computer vision over the last decade have been enabled by enormous datasets curated by humans. If a task is difficult enough that two humans, or even the same human at different times, may give a different “correct” answer, then the data will be inconsistent. In turn, this will cause the models to learn strange idiosyncrasies or stop learning altogether. Similarly, it will be difficult to determine when a model has produced a “correct” prediction, since two different humans may disagree on the “correct” answer.
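One simple, illustrative way to catch this problem before training is to have two people label the same items and measure how often they agree. The labels below are invented; a low agreement rate suggests the "correct" answer is ambiguous and the resulting training data will be inconsistent.

```python
def agreement_rate(labels_a, labels_b):
    """Fraction of items on which two human labelers gave the same answer."""
    same = sum(a == b for a, b in zip(labels_a, labels_b))
    return same / len(labels_a)

annotator_1 = ["jacket", "jacket",    "no jacket", "jacket", "no jacket"]
annotator_2 = ["jacket", "no jacket", "no jacket", "jacket", "no jacket"]
print(agreement_rate(annotator_1, annotator_2))  # → 0.8
```

Production labeling pipelines typically use more robust statistics (such as chance-corrected agreement), but even this raw rate quickly reveals whether a task has one answer humans can reliably produce.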

Secondly, AI models are usually working with the same data that humans are. They may pick up on different statistical cues than a human would, but the underlying data is the same. Therefore, if the data does not contain enough information for a human to produce the right answer, AI will also be unable to generate an accurate prediction.

[Image: a woman showing the back of a blue denim jacket she is wearing, against a yellow backdrop]
[Image: a retro Polaroid camera against a red backdrop]

For example, we are often asked whether our technology can identify the products in a photo. This is sometimes possible based on just the image. In the image with the red backdrop, the brand (Polaroid) and model (One Step Flash) are both visible on the product.

However, some images are much tougher. The image with the yellow backdrop shows only the back of a denim garment, with no brand logo or text visible. Even an expert in denim or the brand would find it difficult to accurately determine which product is on display.

Is there a feedback loop and human touch?

Many businesses want a one-and-done AI implementation in which they provide the training data, the model learns and is put into production, and the job is done. However, in our view, the lack of a feedback loop is the number one reason that AI projects fail in enterprise settings. In practice, AI applications work best when there is a natural feedback loop for integrating user feedback and corrections into the model.

An application with a feedback loop will not only learn faster and more efficiently than one-and-done techniques; it also provides an efficient way to adapt to evolving situations. For example, a retail business may introduce new products, or a real estate company that was processing and analyzing one type of document may encounter new formats. An application that emphasizes a human feedback loop will be able to handle this evolution, while those that rely on a simple one-step training process will tend to break down.

AI techniques rarely, if ever, produce perfect predictions every single time. In cases where each mistake can be costly, it is critical to have that human step to verify the predictions. This “human-in-the-loop” allows the models to markedly accelerate the tedious parts of a targeted workflow while still allowing for peace of mind. This acceleration occurs on two fronts: (1) a well-designed interface allows a user to verify a prediction in a fraction of the time it would take them to produce the prediction themselves, and (2) each human verification improves the models (and simplifies the workflow) even further, courtesy of the feedback loop.
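The human-in-the-loop pattern described above can be sketched as a simple routing step: high-confidence predictions are accepted automatically, while low-confidence ones are queued for a person to verify (and those verified answers can then feed back into training). The threshold, file names, and confidence values below are illustrative assumptions, not part of any real API.

```python
def review_queue(predictions, threshold=0.9):
    """Split predictions into auto-accepted and needs-human-review."""
    accepted, needs_review = [], []
    for item, label, confidence in predictions:
        if confidence >= threshold:
            accepted.append((item, label))
        else:
            needs_review.append((item, label))
    return accepted, needs_review

preds = [
    ("invoice_001.pdf", "total: $120.00", 0.97),  # confident: auto-accept
    ("invoice_002.pdf", "total: $85.50", 0.62),   # uncertain: human verifies
]
accepted, needs_review = review_queue(preds)
print(len(accepted), len(needs_review))  # → 1 1
```

The design choice here is that humans only spend time on the uncertain cases, which is where verification is cheap relative to producing the answer from scratch and where each correction teaches the model the most.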

AI for your business

In our experience, these three questions are essential for laying the foundation of a successful AI implementation. At Impira, we’ve built our AI platform to allow companies, even those without data scientists and machine learning engineers, to solve problems around unstructured data that fit within this framework. Every business problem has its own nuances. The good news is that many of these problems can be reframed in a way that makes them suitable for an AI application.

Think we missed anything? As always, we’d love to hear your thoughts. Connect with us on Facebook, Instagram, LinkedIn, and Twitter.
