Chunking is the process of breaking large pieces of data or text into smaller, more manageable segments called “chunks” to enhance processing by AI models, particularly in large language models (LLMs) and semantic retrieval systems.

Key Points:
- Efficiency: Smaller chunks are faster for AI models to process.
- Focus: Smaller chunks help AI focus on specific data, improving accuracy.
- Scalability: Chunking enables handling of large datasets.
Impact on Search Results:
- Large Chunks: May contain multiple ideas, diluting relevance and making it harder for models to retrieve focused results.
- Small Chunks: Provide more focused, precise results, especially for queries related to a single topic.
The right chunk size is crucial for optimizing AI search results—smaller chunks improve precision, while larger ones offer more context but may reduce relevance.
Deeper dive
Reference: 5 Chunking Strategies For RAG – by Avi Chawla
Here’s the typical workflow of a RAG application: (1) additional documents are embedded and stored in a vector database; (2) at query time, the chunks most relevant to the user’s question are retrieved; and (3) the question plus the retrieved context is passed to the LLM to generate a response.

Since the additional document(s) can be pretty large, step 1 also involves chunking, wherein a large document is divided into smaller, more manageable pieces.

This step is crucial since it ensures the text fits the input size of the embedding model.
Moreover, it enhances the efficiency and accuracy of the retrieval step, which directly impacts the quality of generated responses.
Here are five chunking strategies for RAG:

Let’s understand them today!
1) Fixed-size chunking
The most intuitive and straightforward way to generate chunks is by splitting the text into uniform segments based on a pre-defined number of characters, words, or tokens.

Since a direct split can disrupt the semantic flow, it is recommended to maintain some overlap between consecutive chunks, for example by repeating the last few tokens of one chunk at the start of the next.
This is simple to implement. Also, since all chunks are of equal size, it simplifies batch processing.
But there is a big problem: this usually breaks sentences (or ideas) midway, so important information is likely to end up scattered across chunks.
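To make this concrete, here is a minimal sketch of fixed-size chunking with overlap; the 200-character size and 50-character overlap are illustrative values, not recommendations.

```python
def fixed_size_chunks(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap between neighbours."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # how far the window advances each time
    return [text[start:start + chunk_size] for start in range(0, len(text), step)]

# Example: a long string becomes overlapping 200-character chunks.
document = "Retrieval-augmented generation pairs search with text generation. " * 20
for i, chunk in enumerate(fixed_size_chunks(document)):
    print(i, len(chunk))
```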
2) Semantic chunking
The idea is simple.

- Segment the document based on meaningful units like sentences, paragraphs, or thematic sections.
- Next, create embeddings for each segment.
- Let’s say I start with the first segment and its embedding.
- If the first segment’s embedding has a high cosine similarity with that of the second segment, both segments form a chunk.
- This continues until cosine similarity drops significantly.
- The moment it does, we start a new chunk and repeat.
The output is a set of variable-length chunks, each built around a single coherent topic.

Unlike fixed-size chunks, this maintains the natural flow of language and preserves complete ideas.
Since each chunk is richer, it improves the retrieval accuracy, which, in turn, produces more coherent and relevant responses by the LLM.
A minor problem is that it depends on a threshold for deciding when cosine similarity has dropped significantly, and a suitable threshold can vary from document to document.
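Here is a minimal sketch of this greedy merging loop, assuming the document is already split into sentences or paragraphs; the `embed` callable and the 0.7 threshold are placeholders for whatever embedding model and cutoff you choose.

```python
from typing import Callable

import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def semantic_chunks(
    segments: list[str],
    embed: Callable[[str], np.ndarray],
    threshold: float = 0.7,
) -> list[str]:
    """Greedily merge consecutive segments while their embeddings stay similar."""
    if not segments:
        return []
    chunks, current = [], [segments[0]]
    prev_emb = embed(segments[0])
    for seg in segments[1:]:
        emb = embed(seg)
        if cosine_similarity(prev_emb, emb) >= threshold:
            current.append(seg)               # still on the same topic: extend the chunk
        else:
            chunks.append(" ".join(current))  # similarity dropped: start a new chunk
            current = [seg]
        prev_emb = emb
    chunks.append(" ".join(current))
    return chunks

# Usage with a sentence-transformers model (assumed to be installed):
# model = SentenceTransformer("all-MiniLM-L6-v2")
# chunks = semantic_chunks(sentences, model.encode)
```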
3) Recursive chunking
This is also simple.

First, chunk based on inherent separators like paragraphs or sections.
Next, split each chunk into smaller chunks if its size exceeds a pre-defined limit; if the chunk already fits within the limit, no further splitting is done.
For example, the splitter might first produce two paragraph-level chunks; paragraph 1 exceeds the size limit and is split further into smaller chunks, while paragraph 2 is kept as-is.
Unlike fixed-size chunks, this approach also maintains the natural flow of language and preserves complete ideas.
However, there is some extra overhead in terms of implementation and computational complexity.
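A minimal sketch of the idea is below (libraries such as LangChain ship a similar splitter, RecursiveCharacterTextSplitter); the separator list and size limit here are illustrative assumptions.

```python
def recursive_chunks(
    text: str,
    max_size: int = 500,
    separators: tuple[str, ...] = ("\n\n", "\n", ". "),
) -> list[str]:
    """Split on the coarsest separator first; recurse only into oversized pieces."""
    if len(text) <= max_size:
        return [text]
    if not separators:
        # No separators left: fall back to a hard character split.
        return [text[i:i + max_size] for i in range(0, len(text), max_size)]
    sep, rest = separators[0], separators[1:]
    chunks = []
    for piece in text.split(sep):
        if len(piece) <= max_size:
            chunks.append(piece)                           # small enough: keep as-is
        else:
            chunks.extend(recursive_chunks(piece, max_size, rest))  # recurse with finer separators
    return [c for c in chunks if c.strip()]
```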
4) Document structure-based chunking
This is another intuitive approach.

It utilizes the inherent structure of documents, like headings, sections, or paragraphs, to define chunk boundaries.
This way, it maintains structural integrity by aligning with the document’s logical sections.
The output is one chunk per logical section, for example a heading together with the text under it.

That said, this approach assumes that the document has a clear structure, which may not be true.
Also, chunks may vary in length and can exceed model token limits; you can combine this approach with recursive splitting to handle oversized sections.
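As a sketch, for Markdown input this can be as simple as starting a new chunk at every heading; other formats (HTML, PDFs) would need their own structure-aware parsers.

```python
import re


def markdown_section_chunks(markdown: str) -> list[str]:
    """Split a Markdown document into chunks at heading boundaries (#, ##, ...)."""
    chunks, current = [], []
    for line in markdown.splitlines():
        if re.match(r"^#{1,6} ", line) and current:  # a new heading closes the current chunk
            chunks.append("\n".join(current).strip())
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current).strip())
    return [c for c in chunks if c]


doc = "# Intro\nChunking basics.\n\n## Why it matters\nRetrieval quality.\n"
print(markdown_section_chunks(doc))  # one chunk per heading-led section
```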
5) LLM-based chunking

Since every approach has upsides and downsides, why not use an LLM itself to create the chunks?
The LLM can be prompted to generate semantically isolated and meaningful chunks.
Quite evidently, this method tends to produce semantically coherent chunks, since the LLM can understand context and meaning beyond the simple heuristics used in the four approaches above.
The main drawback is that it is the most computationally demanding of the five techniques discussed here.
Also, since LLMs have a limited context window, long documents may need to be processed in pieces.
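A rough sketch of the prompting approach is below; `call_llm` is a hypothetical stand-in for whatever chat-completion client you use, and the prompt wording is only illustrative.

```python
import json

CHUNKING_PROMPT = """Split the following text into semantically self-contained chunks.
Return a JSON array of strings, where each string is one chunk. Do not rewrite the text.

TEXT:
{text}"""


def llm_chunks(text: str, call_llm) -> list[str]:
    """Ask an LLM to propose chunk boundaries.

    `call_llm(prompt: str) -> str` is a placeholder for your LLM client
    (e.g., a chat-completions call); the model is expected to return JSON.
    """
    response = call_llm(CHUNKING_PROMPT.format(text=text))
    try:
        chunks = json.loads(response)
    except json.JSONDecodeError:
        # Fall back to a single chunk if the model does not return valid JSON.
        return [text]
    return [c for c in chunks if isinstance(c, str) and c.strip()]
```

For documents longer than the model's context window, you would run this over overlapping windows of the text and merge the results.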
Each technique has its own advantages and trade-offs.
I have observed that semantic chunking works pretty well in many cases, but again, you need to test.
The choice will heavily depend on the nature of your content, the capabilities of the embedding model, computational resources, etc.