# Part 9 : Following along MIT intro to deep learning

Abhijit Ramesh / April 06, 2021

12 min read

## Introduction

### Document Intelligence

There are a lot of documents around the internet, such as invoices, leases, revenue contracts, proofs of delivery and tax forms. These documents might be semi-structured or unstructured. Document intelligence is extracting, analysing and interrogating information from these documents.

### Transaction Intelligence

There are billions of transactions happening around the world, and all of them generate data. We can use this data to determine tax and accounting treatment and to identify anomalies and potential fraud.

### Trusted AI

This is the field of AI that deals with identifying, managing and mitigating the risks associated with AI by collaborating with governments, NGOs and industry to enable responsible AI.

## Information Extraction

### Document Intelligence

Document extraction is simply extracting data from documents such as these forms. Here the data is generally entered into the highlighted columns, and we can make use of that information. One challenge with these documents is that they may contain sections, like the list shown in the document above, that can hold any number of items, which we cannot predict; there may also be cases where a field contains no information at all.

Another challenge is supporting documents, such as cheques.

Here we can see a cheque attached to the invoice as a supporting document. The invoice itself contains line items that may or may not have values; these are summed to a total, and a cheque is written for the corresponding amount.

Cheques are challenging because they are generally handwritten, and the data we see is often scanned, which can mean poor image quality and make it hard to pull data from them.

The document on the left is a very common one: it is scanned, the sides are a bit creased, and the information is slightly offset, adding to the challenge. The document on the right is also a very common bill, with line items and corresponding prices, but the image is a bit washed out.

The idea here is to discuss extracting information, and what kind of information to look for, from documents such as the receipt shown above.

## Types of Information

### Header Items

These are fields containing information that occurs once. They are called header fields because ideally they appear at the top of the document, but because of variability they may show up elsewhere on the page. This information is useful for purposes like taxation, checking whether something has been billed, managing inventory, and so on.

This data is generally extracted by first putting bounding boxes around it, along with OCR, and then building a classifier over the contents of those bounding boxes to predict which field each piece of text is. One challenge is the receipt total: a bill usually contains several values representing money, and the model might tag any of them as the total. One way to solve this is to accompany the machine-learning model with hand-written heuristics, but such heuristics make the system more brittle, so engineers need to be careful about how this data is handled. Another challenge is the vendor address: after doing OCR and classification, the order in which the address lines should occur must be considered as well.
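As a concrete illustration of such a heuristic, here is a minimal sketch (the function name and the rule itself are hypothetical, not from the lecture): among the candidate amounts the classifier tagged as possible totals, prefer the largest value, breaking ties by classifier confidence. As noted above, a rule like this is brittle.

```python
def pick_receipt_total(candidates):
    """Pick the grand total from classifier candidates.

    candidates: list of (amount, classifier_confidence) pairs.
    Hypothetical heuristic: the grand total is usually the largest
    amount on the receipt, so take the maximum.
    """
    return max(candidates, key=lambda c: (c[0], c[1]))[0]

# Three amounts were tagged as possible totals; the heuristic picks 36.25.
print(pick_receipt_total([(7.50, 0.9), (4.25, 0.8), (36.25, 0.7)]))  # 36.25
```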

Another challenging area is line items. Even though we as humans understand which value corresponds to which item, it is hard for a machine-learning algorithm to do the same. This becomes harder still when the format of the forms is not fixed: they come in all shapes and sizes, and the order can change a lot, so matching one object to another is a big challenge.

Another source for training such models would be some form of system of record, like the relational database shown above.

## Representing document schema

This is the schema generally used to represent such information. It is similar to JSON data, and the variables in these documents are fairly self-explanatory.
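As a rough sketch, a record in such a schema might look like the following. The field names here are invented for illustration, not the lecture's exact schema: header fields appear once, while line items repeat as a list.

```python
import json

# Hypothetical receipt record: header fields occur once,
# line items repeat as a list of objects.
receipt = {
    "vendor_name": "Acme Office Supplies",
    "vendor_address": "123 Main St, Springfield",
    "receipt_date": "2021-04-06",
    "line_items": [
        {"description": "Stapler", "quantity": 2, "price": 7.50},
        {"description": "Paper ream", "quantity": 5, "price": 4.25},
    ],
    "total": 36.25,
}

print(json.dumps(receipt, indent=2))
```

Note that a system-of-record export like this carries no bounding-box coordinates; the training signal is only the field values themselves.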

The challenge is to map raw documents into this representation, and we have pairs of raw data and such records to train our model on.

## Philosophy of Deep Learning

The classical approach to solving a problem is first to divide it into sub-parts and then solve each part. In machine learning, each of these sub-problems is treated as a learning problem: find data, define an objective, and train a model. Then comes integrating all the sub-parts, which is also hard, because they may not work well together and hand-engineered code is needed to glue them. Error propagation grows as the number of parts increases, and data is needed for each part separately.

A better way is an end-to-end deep learning approach: after dividing the problem into components, each component is treated as a sub-network. The sub-networks are pieced together into a single neural network that is trained end to end. Nothing has to be further integrated, since it is already one network, and it is easier to maintain. We also need only a single source of data.

So how does pre-processing happen for document intelligence?

The whole problem is treated as a parsing problem.

The documents are first parsed with deep neural networks to form a number of parse trees; the most probable tree is taken and read, line by line, into the system of record without any post-processing.

## Context Free Grammars

If we have to construct a parse tree, we need a context-free grammar to parse against. Context-free grammars are one of the most important topics in computer science and the backbone of many programming languages.

A grammar is a set of rules that can be used to replace the elements we are reading in order to construct a parse tree; the grammar shown on the right describes how to do this replacement in the context of the elements being read line by line.
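Such replacement rules can be written down as plain data. A toy sketch in Python, with made-up nonterminal names rather than the lecture's exact grammar:

```python
# A toy context-free grammar for receipt line items, written as
# (left-hand side, right-hand side) production rules.
# Nonterminal names are illustrative only.
GRAMMAR = [
    ("LineItem", ("Description", "Price")),
    ("LineItem", ("Description", "Quantity", "Price")),
    ("Description", ("Description", "Description")),  # merge adjacent words
    ("Price", ("Number",)),
    ("Quantity", ("Number",)),
]

# Which rules could apply when the element we just read is a Description?
applicable = [lhs for lhs, rhs in GRAMMAR if rhs[0] == "Description"]
print(applicable)
```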

Here we can see that some ambiguity can occur: for example, the machine might think that 3 corresponds to a Total Amount rather than a price. The idea is to resolve this ambiguity using a deep network; first, a score is assigned to each grammar rule.

Then we try to use the rules with high scores: every rule has a deep network corresponding to it, and that network produces the score for the rule.

So how do we apply these rules ?

Since a model is constructed for each grammar rule, we can pass an ambiguous value to each of the models and get back a score. Over time, the networks learn which interpretation should score higher, for instance the deep network for Total Amount versus the deep network for Price.

What kinds of ambiguity can appear ?

The first kind is where there is more than one choice we can make: here we can either apply CP or go with DP, but the grammar itself corrects this error, because no right-hand side mentions the possibility of DP appearing.

The next possibility is when both notations are allowed. In this case we score each of them by passing the left-hand side and the right-hand side of the tree to the model and generating a score for each.

So how do we score the whole tree ?

We score the whole tree using the following notation: $c(T)$ represents the score of a tree $T$, calculated as $c(L, 0, n)$, where $L$ is the root of the tree and the other two terms are the range of tokens it spans. We can use the same notation for any subtree as well.

The score is then defined recursively in terms of the subtrees the tree is made up of.

The goal is to find the maximum score, so we rewrite the term in a way that makes this explicit.

Then we use dynamic programming to find the highest scoring parse tree.
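The recursive score and its dynamic program can be sketched as a CYK-style parser over spans. This is a minimal illustration: the per-rule deep-network score is replaced by a stub function, and the grammar and token scorer are invented for the example.

```python
# Binary rules: (LHS, left child, right child). Illustrative grammar only.
RULES = [("LineItem", "Description", "Price"),
         ("Description", "Description", "Description")]

def rule_score(lhs, left, right):
    # Stub for the per-rule deep network; the real system would
    # run a learned network here. Constant score for the sketch.
    return 1.0

def lexicon(tok):
    # Stub token scorer: number-like tokens look like prices,
    # everything else looks like a description word.
    return [("Price", 1.0)] if tok.replace(".", "").isdigit() else [("Description", 1.0)]

def parse(tokens, lexicon):
    n = len(tokens)
    # best[(L, i, j)] = highest score c(L, i, j) for deriving span i..j as L
    best = {}
    for i, tok in enumerate(tokens):
        for label, s in lexicon(tok):
            key = (label, i, i + 1)
            best[key] = max(best.get(key, float("-inf")), s)
    for span in range(2, n + 1):            # widen spans bottom-up
        for i in range(0, n - span + 1):
            j = i + span
            for k in range(i + 1, j):       # split point
                for lhs, a, b in RULES:
                    if (a, i, k) in best and (b, k, j) in best:
                        s = best[(a, i, k)] + best[(b, k, j)] + rule_score(lhs, a, b)
                        if s > best.get((lhs, i, j), float("-inf")):
                            best[(lhs, i, j)] = s
    return best

scores = parse(["blue", "stapler", "7.50"], lexicon)
print(scores.get(("LineItem", 0, 3)))  # 5.0
```

Because every subtree score is memoised in `best`, each span is computed once, which is exactly what makes finding the highest-scoring tree tractable.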

We have now defined a scoring mechanism for parse trees over these line items, so we can choose among all possible parse trees by score. The tree with the highest score is the one we consider most likely, and the most likely tree is the one containing the information we care about.

We then train the deep network to select the most probable parse trees.

Each term on the right-hand side represents a deep network, and they are defined recursively, so if we unroll this we end up with one big deep network. This large network is built for each and every document we are trying to parse.

How do we train these networks ?

## Learning objectives and Training

The learning objective here is similar to any structured-prediction objective: our goal is to maximise the score of the correct parse trees and minimise the score of the incorrect ones, and this is exactly what our loss function defines. After training this network, we are left with an end-to-end system for parsing these documents.
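One common way to express such an objective is a margin (hinge) loss that pushes the correct tree's score above every competing tree's score. A minimal sketch with plain numbers standing in for tree scores; the exact loss used in the lecture may differ:

```python
def structured_hinge_loss(gold_score, best_wrong_score, margin=1.0):
    # Penalize whenever the highest-scoring incorrect parse comes
    # within `margin` of the correct parse's score.
    return max(0.0, margin + best_wrong_score - gold_score)

# Correct tree scores well above the best competitor: zero loss.
print(structured_hinge_loss(10.0, 7.5))  # 0.0
# Competitor too close: positive loss drives the networks apart.
print(structured_hinge_loss(10.0, 9.8))
```

Minimising this loss raises the score of the correct tree relative to high-scoring incorrect trees, which is exactly the behaviour described above.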

This is how we would handle the data in 1D, but let us see how we can do it in 2D.

## 2-dimensional parsing

On the left-hand side we can see the data to be extracted, and on the right-hand side the target of what parsing should produce. We do OCR on the data and score it with the network as before. There is also some extra data, shown by the red bounding boxes, which is removed to simplify the parsing.

So why do we need 2D parsing?

If we take one of the line items shown before and think of it in a 1D manner, we can see that it loses its continuity. It loses its representation, and the result is a truncated description field for the line item. This is why the need for 2D parsing arises: it maintains continuity, and it is also representative of how we humans interpret data.

We start with the tokens and pass them through the deep network to get labels for what the network thinks each token is.

For the first combination we can do a horizontal parse, because we have a clearly defined grammar rule stating what to do when two Descriptions appear together.

We could next combine with the Total Amount on the right, but the problem is that this would leave the other bounding boxes dangling, and the result would not be an ideal parse, even though the grammar for it is already defined.

We do not have to define the direction of the next parse, because the deep network predicts the most probable next parse.

We can then continue doing vertical parses until the last term.

Finally we can do the horizontal parse of the Q with the Total Amount, and the parse ends as a Line Item.

## Handling noise in the parsing

Earlier we described that the document might contain some irrelevant text. How do we handle it?

We handle it by allowing the deep network to classify it with a token class called Noise. This Noise class also has some grammar rules defined, which are used to handle the noise.

The first rule is that two Noise tokens can be combined into one Noise token.

Then we add another rule saying that noise surrounding a Description can be absorbed, with the result treated as a Description. The ! indicates that the Noise can come before or after the Description.

Finally, we can combine two Descriptions to get one Description, and thus the irrelevant information is handled.
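The three noise rules above can be sketched as productions in the same (LHS, RHS) style used earlier; the names are illustrative only:

```python
# Noise-handling productions, mirroring the three rules described above.
NOISE_RULES = [
    ("Noise", ("Noise", "Noise")),              # merge adjacent noise
    ("Description", ("Noise", "Description")),  # noise before a description
    ("Description", ("Description", "Noise")),  # noise after a description
    ("Description", ("Description", "Description")),  # merge descriptions
]

# Noise never survives as its own field: every rule either keeps it as
# Noise or absorbs it into a Description.
print(sorted({lhs for lhs, _ in NOISE_RULES}))
```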

Finally we end up with the right parse tree for the document, and we get the matching information as shown above.

## Experimental Results from EY

EY compared their approach with Clova AI, which also uses receipts as a dataset, the same use case as EY. However, Clova AI requires every bounding box of the receipt to be annotated, whereas EY rely on records in JSON format, which contain no bounding-box coordinates, and they are able to achieve comparable results with this better technique.

## Sources

MIT introtodeeplearning : http://introtodeeplearning.com/
