
Decoding documents: how SmartBots chunk data with LLMs

By Ben McCulloch

Consider this scenario: you’re tasked with building an LLM-powered assistant that lets people query your organisation’s documents. You know that RAG (retrieval augmented generation) has been getting a lot of attention recently, and your team wants to try it too.

So you gather all the useful documents. Do you read every last word in every document? Probably not – that would defeat the purpose of using AI to help you rapidly access the nuggets of information within.

The next step is to chunk them: split each document into pieces that a retrieval system can index and search. To be efficient, you need to do it automatically, without knowing what’s inside.

That’s quite a challenge! How would you do it?

Jaya Prakash Kommu, Co-Founder & CTO of SmartBots, suggested a great solution to this challenge when he recently appeared on the VUX World podcast.

Every point is unique

Before we look at the different strategies you might have for chunking the data, we should first ask this question – how long does it take to make a point?

You know what we mean. Every good document (as well as other media, such as video and audio) leads you towards conclusions, taking you there via supporting information.

But every author is different, every media type is different, and every document is produced for a different need.

Even the summaries made within a document – a table of contents, an index, bullet points, or a conclusion at the end of a section – might not reference every point in the entire document.

When trying to decide how to divide that data into useful chunks, you could split on each paragraph, each page, or some other arbitrarily chosen length – but why? Some writers get to their point in a paragraph, and some get there in a page. It also depends on the subject being discussed. The menu for a company’s annual staff party is (hopefully) going to be more concise than its whitepapers!
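To make the risk concrete, here’s a minimal sketch in Python of what fixed-size chunking does to a point that straddles the boundary. The text and the 80-character limit are made up purely for illustration:

```python
def fixed_size_chunks(text: str, size: int) -> list[str]:
    """Naive chunking: cut every `size` characters, ignoring meaning."""
    return [text[i:i + size] for i in range(0, len(text), size)]

doc = (
    "Our Q3 revenue grew 12%, driven mainly by the new EMEA partnerships. "
    "However, churn rose sharply in the SMB segment, offsetting the gains."
)
for chunk in fixed_size_chunks(doc, size=80):
    print(repr(chunk))
# The second point ("However, churn rose sharply...") is cut in half
# mid-word, so a retriever could surface an incomplete, even misleading, chunk.
```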

An adaptable approach

SmartBots have an innovative approach to this challenge.

They don’t define a strict chunk length and apply it to everything. If they did, the results would be inconsistent: sometimes a nugget of information would land neatly within one chunk, and sometimes it wouldn’t. Sometimes it would be split across a few different chunks, forcing you to search harder and work your LLM harder on the other end when summarising. The risk is that only part of a point is captured.

Instead, SmartBots use an automated approach that leads to more accurate results, and it’s actually pretty simple.

Here’s how they do it: they use an LLM to chunk the data. In other words, the LLM sifts through the documents and decides where each nugget of information starts and ends. A chunk could be short or it could be long – the point is that it’s adaptable. That makes sense, because every document is different.
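SmartBots haven’t published their exact prompt or pipeline, so treat the following as a rough sketch of the idea. It assumes the OpenAI Python SDK; the model name, prompt wording and '---' delimiter convention are illustrative choices, not theirs:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def llm_chunk(document: str) -> list[str]:
    """Ask an LLM to mark where each self-contained point starts and ends."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any capable chat model works
        messages=[
            {
                "role": "system",
                "content": (
                    "Split the user's document into self-contained chunks, "
                    "one complete point per chunk. Do not rewrite the text. "
                    "Reproduce it verbatim, inserting '---' on its own line "
                    "between chunks."
                ),
            },
            {"role": "user", "content": document},
        ],
    )
    text = response.choices[0].message.content
    return [chunk.strip() for chunk in text.split("---") if chunk.strip()]
```

The key design choice is that the boundaries come from the model’s reading of the content, not from a character count.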

Once the LLM has marked the boundaries, they can divide the documents up accordingly and add the chunks to the knowledge base at the heart of their RAG system. That way, LLMs are used at both the start and the end of the process.
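Continuing the sketch, each chunk could then be embedded and stored with a pointer back to its source document. This assumes the OpenAI embeddings endpoint and a toy in-memory store (a real deployment would use a vector database); the file name is hypothetical:

```python
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of chunks; the model choice is illustrative."""
    result = client.embeddings.create(
        model="text-embedding-3-small", input=texts
    )
    return np.array([item.embedding for item in result.data])

# Build the knowledge base: one vector per LLM-defined chunk, keeping
# the source document alongside so answers can cite it later.
chunks = llm_chunk(open("whitepaper.txt").read())
knowledge_base = {
    "chunks": chunks,
    "vectors": embed(chunks),
    "source": "whitepaper.txt",
}
```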

When a user queries those documents, they’re presented with both the LLM’s responses and links to the source documents, so they can fact-check and read more if they want to.
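That retrieve-answer-cite loop might look roughly like this, reusing the illustrative embed helper and knowledge base from above, with plain cosine similarity standing in for whatever retrieval machinery SmartBots actually use:

```python
def answer(query: str, kb: dict, top_k: int = 3) -> dict:
    """Retrieve the closest chunks, answer from them, and cite the source."""
    q = embed([query])[0]
    vecs = kb["vectors"]
    # Cosine similarity between the query and every stored chunk.
    sims = vecs @ q / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(q))
    top = sims.argsort()[::-1][:top_k]
    context = "\n\n".join(kb["chunks"][i] for i in top)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
    )
    # Return the link back to the source so the user can fact-check.
    return {"answer": response.choices[0].message.content, "source": kb["source"]}
```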

Why are you doing it this way?

LLMs are still relatively new, certainly in the enterprise. They’re exciting and brilliant, but they also go off the rails occasionally. We’re still defining the best practices for using them.

While we might try to race ahead by applying them everywhere we can, we need to make sure we’re playing to their strengths rather than adding new problems into the mix.

We get there by asking ‘why are we doing it this way?’

If you think like that, you can see problems in your process, such as how data is chunked before it goes into a knowledge base. A simple fix can work wonders!

Thanks to SmartBots’ Jaya Prakash Kommu for sharing this. You can watch his VUX World interview here.
