What's the difference between using Langchain's Retrieval with .from_llm and defining LLMChain? – Langchain

by Liam Thompson
information-retrieval langchain large-language-model llama-cpp-python openai-api

Quick Fix: Both .from_llm and a manually defined LLMChain let you use Langchain’s retrieval with an LLM. .from_llm is a convenience method for users who want a pre-configured chain, while defining the LLMChain yourself gives advanced users finer control over each part of the chain.

The Problem:

A developer is confused about the difference between using Langchain’s Retrieval with .from_llm and defining LLMChain manually. They want to know whether the output differs, what happens underneath each approach, when to use one over the other, and whether the first example uses a question_generator or doc_chain.

The Solutions:

Solution 1: Different Approaches of Langchain Retrieval

With the .from_llm method, you are working at a higher level of abstraction. This method pre-configures a ConversationalRetrievalChain with sensible defaults based on the provided LLM. It automates the setup of the question generator and the doc chain, making it simpler to use. This approach is particularly beneficial for users who prioritize simplicity and want to minimize setup errors.
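
A minimal sketch of this approach. The FAISS vector store, OpenAI embeddings, and the `docs` variable are assumptions for illustration; swap in your own documents and retriever:

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Assumption: `docs` is a list of Documents you have already loaded and split.
vectorstore = FAISS.from_documents(docs, OpenAIEmbeddings())

llm = ChatOpenAI(temperature=0)

# .from_llm wires up the question generator and the doc chain for you.
qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectorstore.as_retriever(),
)

result = qa({"question": "What does the report conclude?", "chat_history": []})
print(result["answer"])
```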

On the other hand, the manual approach with LLMChain provides more granularity and control. You explicitly define each component of the chain, including the LLM, the question generator, and the doc chain, and then assemble them yourself. This lets you customize each part to your specific requirements. It is more suitable for advanced users who want precise control over the chain’s behavior, usually at the cost of increased complexity.
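
A sketch of the manual assembly, mirroring the second example from the question. It assumes the same `llm` and `vectorstore` as above; CONDENSE_QUESTION_PROMPT is Langchain’s built-in default, and you could substitute your own prompt here:

```python
from langchain.chains import ConversationalRetrievalChain, LLMChain
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT
from langchain.chains.qa_with_sources import load_qa_with_sources_chain

# Explicitly build the question generator, which condenses the chat
# history plus the follow-up into a standalone question.
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)

# Explicitly build the chain that combines the retrieved documents
# into the final answer.
doc_chain = load_qa_with_sources_chain(llm, chain_type="map_reduce")

qa = ConversationalRetrievalChain(
    retriever=vectorstore.as_retriever(),
    question_generator=question_generator,
    combine_docs_chain=doc_chain,
)
```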

Solution 2: Structure of the ConversationalRetrievalChain

The .from_llm method in ConversationalRetrievalChain provides a convenient way to load a chain from an LLM (Large Language Model) and a retriever. It creates the question_generator chain and the combine_docs_chain automatically.

In contrast, the second example explicitly defines the LLM (llm) and the question_generator prompt separately. It then loads the combine_docs_chain using load_qa_with_sources_chain.

The main difference is that .from_llm simplifies the process of creating a ConversationalRetrievalChain by handling the creation of the question_generator and combine_docs_chain internally.

When to use one or the other:

  1. Use .from_llm when you want a quick and easy way to create a ConversationalRetrievalChain without having to worry about creating the question_generator and combine_docs_chain separately.
  2. Use the second example when you want more control over the individual components of the ConversationalRetrievalChain, such as specifying a custom LLM or question_generator prompt, as sketched below.
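
To illustrate point 2, here is a minimal sketch of swapping in a custom condense-question prompt. The prompt wording and the CUSTOM_CONDENSE_PROMPT name are illustrative assumptions, not Langchain defaults; only the {chat_history} and {question} input variables are required by the chain:

```python
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# Hypothetical custom prompt. The wording is illustrative; the input
# variables {chat_history} and {question} are what the chain expects.
CUSTOM_CONDENSE_PROMPT = PromptTemplate.from_template(
    """Given the conversation below, rewrite the follow-up question
as a single standalone question.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
)

question_generator = LLMChain(llm=llm, prompt=CUSTOM_CONDENSE_PROMPT)
```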

Is the output the same?

Yes, provided the same components are assembled. Both approaches return a ConversationalRetrievalChain object that can be used to retrieve documents and answer questions, so the outputs match when the manually defined question_generator and combine_docs_chain mirror what .from_llm would create internally.
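
Whichever way the chain was built, it is invoked the same way. A short usage sketch, assuming `qa` is a chain produced by either approach:

```python
chat_history = []

result = qa({"question": "What is the main finding?", "chat_history": chat_history})
chat_history.append(("What is the main finding?", result["answer"]))

# Follow-up: the question_generator condenses this question plus the
# history into a standalone question before retrieval.
result = qa({"question": "Why does it matter?", "chat_history": chat_history})
```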

Q&A

What does the .from_llm method do under the hood?

It creates the question_generator (an LLMChain using the default condense-question prompt) and the combine_docs_chain (a "stuff"-type QA chain by default).

What is the purpose of combine_docs_chain?

To combine the relevant documents fetched by the retriever and pass them to the LLM to produce the final answer.

What does question_generator do?

It condenses the chat history and the latest follow-up question into a single standalone question, which is then sent to the retriever.
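
If you want to see the default behavior, you can print the built-in condense prompt. The comment below paraphrases the template from memory, so check the exact wording in your installed version:

```python
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT

# The template takes {chat_history} and {question} and asks the LLM to
# rephrase the follow-up question as a standalone question.
print(CONDENSE_QUESTION_PROMPT.template)
```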
