Impira's Review Workflow, helping users get through prediction review faster and more easily.

Review Workflow: An interview with Amy Le, senior product designer

Amy Le tells us how several collaborative teams at Impira combined their efforts to create the Review Workflow — a new way of reviewing machine learning predictions that’s quicker and less tedious.


Meet Amy Le, Impira’s senior product designer. She works with Impira’s product, design, and engineering teams to create features that improve our users’ experience and give them more tools to boost their productivity so they can do more in less time.

In this interview, Amy walks through the collaborative process behind creating Impira’s Review Workflow, a new way for users to quickly and easily go through machine learning (ML) predictions while simultaneously improving their own custom ML models on the fly. Impira is taking a tedious, industry-wide process of reviewing ML predictions and turning it into an elegant, no-code experience.

What does the typical process of reviewing machine learning predictions look like? 

Amy: 

I’m not a machine learning expert, but I have run through a variety of commonly used machine learning solutions whose review process looks like a never-ending task list of things a user has to do. Because of the repetitive nature of this process, many companies will outsource this work to places with cheap labor. It actually reminded me of Mechanical Turk-type services, where you can hire a person on the internet to do tasks like reviewing optical character recognition (OCR) readings and ensuring they match up with what’s on the document itself.

The companies that need to outsource prediction review tend to have a lot of data, since machine learning requires a lot of training. It can be hard to continually obtain good-quality results, since it’s often remote teams who are reviewing in a silo. Decentralized review of such tedious work gives companies less opportunity to provide oversight and maintain quality control. On top of this, any company with a limited budget could have a tough time incentivizing outsourced workers to work exceptionally hard to completely eliminate mistakes.

Those things, combined with how difficult it is for anyone to stay motivated while doing large amounts of tedious work, made this a less-than-ideal solution overall, one that creates a lot of opportunity for human error and bad data.

How did you address the problem of “confidence”?

Amy: 

When we started designing a solution for this problem, we saw two different sides to the issue of addressing “confidence.” On one side, we have the confidence of the OCR results, and on the other side we have the confidence of the machine learning (ML) model’s predictions.

When it comes to the machine learning side, as it operates in Impira, a user corrects a prediction and that automatically retrains our machine learning models. This helps the ML models learn and improve, and it makes all of the other remaining predictions better. That happens because the smarter, newly improved model takes another “look” at low-confidence predictions and reprocesses them, making the review process easier for the user as they go.

The interesting thing is that while you’ll definitely see long-term improvement in your machine learning model’s performance as you go, it gets tricky when you try to quantify that improvement — because, as with human learning, improvement doesn’t grow in a perfectly linear way. This is a cool but tricky situation we had to deal with on that side.
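
To make that correct-and-retrain loop a bit more concrete, here is a rough sketch in Python. It is not Impira’s actual code or API: the Prediction shape, the 0.9 confidence threshold, and the correct_fn/retrain_fn callbacks are all assumptions made for this illustration. The point is just the mechanic Amy describes: each correction becomes new training data, and the retrained model re-scores whatever is still unconfirmed.

```python
from dataclasses import dataclass

# Illustrative sketch only; not Impira's code. The threshold, field names, and
# callback signatures below are assumptions made for this example.

CONFIDENCE_THRESHOLD = 0.9  # predictions scored below this get a human look


@dataclass
class Prediction:
    field: str          # e.g. "Invoice Number" (hypothetical field name)
    value: str          # the value the model extracted from the document
    confidence: float   # how confident the model is in that value
    confirmed: bool = False


def review(predictions, correct_fn, retrain_fn):
    """Correct the lowest-confidence prediction, feed the correction back, and
    repeat until every prediction is either confirmed or high confidence."""
    while True:
        pending = [p for p in predictions
                   if not p.confirmed and p.confidence < CONFIDENCE_THRESHOLD]
        if not pending:
            return  # nothing left to review

        worst = min(pending, key=lambda p: p.confidence)
        worst.value = correct_fn(worst)  # reviewer confirms or fixes the value
        worst.confirmed = True
        # retrain_fn is expected to retrain on this correction and refresh the
        # confidence scores of the remaining, unconfirmed predictions.
        retrain_fn(worst)
```

In a setup like this, the retraining callback does the heavy lifting: after each correction it can push many of the remaining predictions above the threshold, which is why the pending list can shrink by more than one row per click.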

For the OCR side, the process of reviewing predictions is a bit more straightforward. If the OCR doesn’t read a bit of text properly, a user has to go in and correct that. The OCR’s “eyes” can’t really compete with human eyes, and the OCR doesn’t actively improve itself, so someone needs to go through each low-confidence reading.

We had to ask ourselves, “How do we create a [prediction review] experience that doesn’t feel onerous, and how do we present enough information for the user?” Because when you review stuff, you just want to get it done, right? Our goal was to encourage our users to review these predictions without forcing them to think too much about the task itself. The entire purpose of Impira is to minimize repetitive tasks, so we wanted to make a typically repetitive process much more pleasant.

Read more about machine learning confidence at Impira.

What were some of your goals and milestones?

Amy: 

The design, engineering, and product teams had a lot of internal back-and-forth about how we could rethink this review process. If you look at some of our original design ideas, you’ll see tons of solutions that don’t look remotely like anything you see in-product today. We tried out ideas like having users verify predictions by working on the document itself, or moving through a straight top-to-bottom queue, among others.

The first milestone we wanted to meet was to ensure that reviewing predictions was simple and easy for any user, even if a new contractor or team member stepped in to do this task for the first time. They’d know exactly where to go, have all the information they need, and have a focused viewpoint in the product to get things done as easily as possible. 

After we figured out where we wanted this feature to live in our product, we had many debates about how we’d communicate what was happening behind the scenes with the machine learning models: How would we show progress? How would we indicate that models had updated and reprocessed predictions? How would we earn the user’s trust that Impira’s Review Workflow is actually effective and working for them?

We even played around with fun ideas like using an analog battery meter to indicate the progress of training a machine learning model. The goal was to show the perfect amount of information without bogging users down through overcommunication. We also wanted to ensure users knew exactly when they were “done” with this task, instead of being left in the dark about whether they needed to do anything more to finish reviewing predictions. We wanted users to feel satisfied and accomplished.

So, what does the Review Workflow look like now?

Amy:

To enter the Review Workflow (after uploading files and creating fields), a user can simply click a button that indicates exactly how many predictions they should review. 

Upon entering the workflow, users see a percentage at the top of the window that tells them how far along they are in making their whole dataset fully confident and ready for use, as well as a subtle progress bar that inches along with them as they go.

Every time a user clicks to confirm predictions, bold green bars sweep across each completed row and the data readiness percentage jumps up. While this is happening, Impira uses that input behind the scenes to update the machine learning models in real time. This lets Impira automatically apply new learnings to the rest of the predictions, which can take care of multiple rows at once and cuts down a user’s total workload in a really significant way.
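
As a rough way to picture that data readiness number (again, just an assumption about how such a metric could be computed, not Impira’s actual formula), think of it as the share of predictions that are either confirmed by a reviewer or already above the confidence threshold. Reusing the hypothetical Prediction shape from the earlier sketch:

```python
def data_readiness(predictions, threshold=0.9):
    """Hypothetical readiness metric: the share of predictions that are either
    confirmed by a reviewer or already above the confidence threshold."""
    if not predictions:
        return 100.0
    ready = sum(1 for p in predictions
                if p.confirmed or p.confidence >= threshold)
    return 100.0 * ready / len(predictions)
```

Under a definition like this, a single confirmation that retrains the model and lifts other rows above the threshold bumps the percentage by more than one row’s share, which matches the behavior Amy describes.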

We found that this experience is really satisfying for users because it clearly communicates that their predictions are confirmed and that Impira is working hard to take care of a laundry list of tedious tasks they don’t really want to do.

All the design elements we have in place — the progress bar, the data readiness percentage, the sweeping green bars — serve as barometers that tell users their work is done and they can move on to other things. Punctuating that moment was really important to our team because it helps our users feel relief and accomplishment.

Though reviewing predictions is a common procedure in the machine learning industry, by creating a new way to do it, we’re able to have a longer-term, under-the-hood impact. Even if users without data science experience can’t spell out explicitly what’s happening with the technology, there’s still a short-term understanding that leads to a long-term benefit: making Impira’s ML models more and more useful for any files added in the future. Users are reviewing predictions, but they’re also training machine learning models at the same time. They don’t need a ton of training data to accomplish this, nor do they need a lot of data science experts to help them. The training process has become something anyone can easily do.

How does this project speak to Impira’s values as a company? 

Amy:

Because Impira’s mission is to “make meaningful,” we’re always on the hunt for features that take annoying, tedious tasks off our users’ plates. By finding all the little ways we can relieve users of boring, onerous work, we’re accomplishing our goal to make their time, energy, and work more meaningful.

We also love collaboration and getting to put lots of minds together. People from so many different teams made significant contributions — from engineering and product management to brand design and UX writing. That collaborative spirit is important to us and helps us create useful resources for our users. Making a cool feature that works on a technical level is one thing, but bringing in a layer of user attentiveness increases the complexity of any project, so it was important to have lots of eyes and voices contribute to our efforts.

As long as we’re making our product more fun and easy to use, then I think we’re on the right track.

Check it out for yourself today