Three roadblocks to 100% automated forms processing

As technology continues to make our lives easier, it can often serve to heighten our awareness of even the most minor inconveniences. If you’ve ever traveled a distance of 2,500 miles in under 6 hours, then gotten annoyed because the line to pick up your rental car took an hour, you know what we mean. Never mind that you just flew—in the air, like a bird—and landed safely on the other side. In that moment of inconvenience at the rental kiosk, all you can think is, “How is this still like this?”

It’s the same thing with automated document processing: optical character recognition (OCR) technology can handle data entry much faster than a human, but at the cost of accuracy. The character error rates of OCR solutions vary widely, averaging anywhere from 2% to 10%. With AI, accuracy can climb to maybe 95%, which is impressive, but not perfect. And that’s not the whole story.

For every error a document has, a person is required to review and correct it, taking the whole process from automatic to manual. You could have 99% accuracy, but just one missing field on a page prevents 100% automation. This is why, even in 2023, BPOs are reporting they still have to manually process upwards of 20% of all the forms they receive.
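To see why a small error rate still derails full automation, here’s a rough back-of-the-envelope sketch in Python. The per-field accuracy uses the 99% figure above; the number of fields per form is an assumption made purely for illustration:

```python
# Back-of-the-envelope estimate: how many forms come through with zero errors?
field_accuracy = 0.99    # assumed probability that any single field is read correctly
fields_per_form = 30     # assumed number of fields on a typical form (illustrative)

# A form is fully automated only if *every* field is read correctly.
forms_fully_automated = field_accuracy ** fields_per_form

print(f"Forms needing no human touch: {forms_fully_automated:.0%}")      # ~74%
print(f"Forms needing manual review:  {1 - forms_fully_automated:.0%}")  # ~26%
```

Even with 99% accuracy on every individual field, roughly a quarter of 30-field forms arrive with at least one error, which is in the same ballpark as the manual-processing figures BPOs report.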

We’re not talking about all kinds of documents here, by the way. We’re referring to forms: aka structured data. If you were just throwing random papers into your scanner (magazine pages, handwritten documents, anything), you’d expect the software to have a hard time here and there. But if you’re feeding it forms, one page after another with the same layout and the same fields in the same locations, you’d expect it to be easy. The patterns are predictable and the formatting is anticipated. Processing all of them automatically, with zero human intervention, shouldn’t be that difficult. And yet roughly 20% still end up in human hands. Which brings up a very important question:

How is forms processing still not fully automated?

The short answer is: no technology is perfect. OCR, especially when complemented by artificial intelligence and machine learning, is a game changer for data processing. Organizations can rely on tech to ingest more data faster than ever, but with an error rate somewhere between 2% and 10%. An organization that processes a million documents a year with an error rate in the middle of this range (6%) is chasing down 60,000 errors in that year. This is still a vast improvement over what’s possible without OCR, but there are certain challenges inherent to the process:

  • Poor image quality

As they say, “Garbage in, garbage out.” OCR software is fussy about image quality: the ideal is a crisp white background with clear black type, good lighting, and proper contrast. Some people don’t know the right settings to use on a scanner and feed in low-resolution images that an OCR system can struggle to read. Some industries, like healthcare, still rely heavily on fax machines and their low-grade optics. With less-than-perfect input, the output will follow suit.

  • Variety of formats

OCR thrives on predictability and consistency. In a perfect world, all scanned documents would show up as crystal-clear, high-resolution images. Enforcing that as a rule, though, would hamper your flexibility and probably cost more time than it’s worth, since you’d have to train your customers to follow it. Beyond file formats, there’s also the format of the form itself.

AI-powered OCR can recognize which form it’s dealing with based on its structure, but there’s no guarantee about the data that’s been entered into the form. Something like a Medical Record Number is easier for AI than a name, because it always follows the same format (see the sketch after this list). But then you’ve got to hope it was typed and not handwritten.

  • The value of the data

Some data just isn’t as important to get right as other data. Did the OCR read “Main Rd” instead of “Main St”? That’s OK; in combination with other address data like the ZIP code, the mail will still get there. Did it misspell a customer name? Embarrassing, but not critical. Still, some data has to be 100% accurate (think blood test results, or a police report from a car accident), and there’s just no wiggle room here. Errors are unacceptable, and error correction must be fast enough that bad data is caught before it informs a decision somewhere. Even better than lightning-fast error correction is not having errors at all.
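To illustrate the point about fixed-format fields from the “Variety of formats” item above, here’s a minimal sketch in Python. The MRN pattern is entirely hypothetical (real medical record numbers vary by institution), and the function name is invented for illustration:

```python
import re

# Hypothetical format: "MRN-" followed by exactly eight digits.
# Real medical record numbers differ from one institution to the next.
MRN_PATTERN = re.compile(r"^MRN-\d{8}$")

def looks_like_valid_mrn(ocr_value: str) -> bool:
    """Return True if the OCR output matches the expected MRN format."""
    return bool(MRN_PATTERN.match(ocr_value.strip()))

print(looks_like_valid_mrn("MRN-00482913"))  # True: the format check passes
print(looks_like_valid_mrn("MRN-OO482913"))  # False: OCR read the digit 0 as the letter O
```

A free-text field like a person’s name has no such pattern to check against, which is why a misread name can slip through unnoticed.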

ScaleHub closes the speed vs accuracy gap for forms processing

Putting OCR and AI together is a game changer in terms of the speed at which you can digitize documents and capture their data. But, like the airplane that flew you from one side of the continent to the other in under 6 hours, you still aren’t at your destination when you land. Error correction is the long line at the rental car kiosk: you’ll only move as fast as the agents working through the customers ahead of you. What you need is some kind of Gold membership, so you can skip the line and hop right into your car to get where you’re going.

In our rental car metaphor, it’s important to point out that the Gold membership that lets you skip the line doesn’t remove humans from the process. It all just happens behind the scenes to give the appearance of being seamless and automatic. With ScaleHub, the road to accurate data is paved with people (a crowd of people, in fact); you just don’t see them. The truth is, AI can do things quickly and at massive scale, but nothing can compete with human intelligence when it comes to finding and fixing errors in forms processing. Only human intelligence, at large scale, can bring your data that last mile to complete accuracy.

And this is how ScaleHub is making 100% automated forms processing a reality. Our solution relies on both crowdsourcing and microtasking to act as checks and balances against automation errors. The software uses OCR and AI to digitize huge amounts of paper-bound data. It then breaks this data up into discrete chunks—called snippets—and presents them to behind-the-scenes humans to confirm accuracy. These people will compare the small portion of the original scan to the alphanumeric string that the software generated from the image. If they match, the data is accepted; if they don’t, it’s corrected. A byproduct of this process? Data privacy remains intact, as no one person has access to the whole form.
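To make that snippet workflow concrete, here’s a minimal sketch in Python. The class, field names, and functions are invented for illustration only; they aren’t ScaleHub’s actual API, and the human step is just a placeholder:

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    """One small piece of a form: a cropped image and what the OCR software read from it."""
    field_name: str
    image_crop: bytes  # the cropped region of the original scan
    ocr_text: str      # the alphanumeric string generated from that image

def ask_crowd_worker(snippet: Snippet) -> str:
    """Placeholder for the human step: a worker sees only this one snippet
    and types what the image actually says."""
    raise NotImplementedError("Stands in for the crowdsourced review step")

def verify_form(snippets: list[Snippet]) -> dict[str, str]:
    """Confirm or correct each snippet independently."""
    verified = {}
    for snippet in snippets:
        human_reading = ask_crowd_worker(snippet)
        if human_reading == snippet.ocr_text:
            verified[snippet.field_name] = snippet.ocr_text  # match: accept the OCR result
        else:
            verified[snippet.field_name] = human_reading     # mismatch: take the correction
    return verified
```

The detail that matters here is that each snippet is reviewed on its own: no single reviewer ever sees enough of the document to reconstruct it, which is how data privacy stays intact.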

OCR software on its own, and in the best of conditions, handles the tedium of transcription work much faster than a human—but it’s still error prone. Adding AI to the mix certainly improves accuracy, but it still isn’t perfect. The mix of artificial and human intelligence—what we call Collective Intelligence—makes the ScaleHub solution capable of total automation. It’s a solution so smart, only a human could have invented it.

Curious how we do it? Watch the video below to learn more.
