The Solutions:
Solution 1: Use a Custom Callback Handler
To log model calls and their answers to a variable using LangChain callbacks, you can create a custom callback handler.
from typing import Any, Dict

from langchain import PromptTemplate
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI


class MyCustomHandler(BaseCallbackHandler):
    def on_text(self, text: str, **kwargs: Any) -> Any:
        """Run on an arbitrary text event, e.g. the formatted prompt."""
        print(f"Text: {text}")
        self.log = text

    def on_chain_start(
        self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any
    ) -> Any:
        """Run when chain starts running."""
        print("Chain started running")


llm = OpenAI()
prompt = PromptTemplate.from_template("1 + {number} = ")
handler = MyCustomHandler()
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])
chain.run(number=2)
This custom handler prints each text event the chain emits and stores the most recent one in the handler.log attribute.
Q&A
How can I remove the special characters/codes that appear in the variable logging the output of a LangChain LLM call?
LangChain adds ANSI color codes to the formatted prompt for console highlighting; these codes are never passed to the LLM. Consider logging the raw prompt text instead, or stripping the escape codes before storing the value.
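If you do need to clean an already-logged value, a minimal sketch of stripping ANSI escape sequences with a regular expression (the `strip_ansi` helper name is my own, not part of LangChain):

```python
import re

# Matches ANSI escape sequences such as "\x1b[32;1m" (colors, resets, etc.)
ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")


def strip_ansi(text: str) -> str:
    """Remove ANSI color/formatting codes from logged text."""
    return ANSI_ESCAPE.sub("", text)


colored = "\x1b[32;1m1 + 2 = \x1b[0m"
print(strip_ansi(colored))  # -> "1 + 2 = "
```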
Can callbacks be used to log the model’s outputs?
Yes, callbacks can be used to log model outputs. Custom callback handlers can be created to handle text and other events.
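As a rough sketch of that idea (using a plain class here for brevity; in real use it would subclass BaseCallbackHandler as above), a handler can accumulate completions in `on_llm_end`, which LangChain calls with an `LLMResult` whose `generations` attribute holds the generated texts. The `OutputLogger` name is illustrative, not a LangChain class:

```python
from typing import Any, List


class OutputLogger:
    """Sketch of a handler that collects every completion the LLM returns.

    In real use this would subclass langchain.callbacks.base.BaseCallbackHandler,
    and LangChain would invoke on_llm_end with an LLMResult object.
    """

    def __init__(self) -> None:
        self.outputs: List[str] = []

    def on_llm_end(self, response: Any, **kwargs: Any) -> None:
        # LLMResult.generations is a list (one entry per prompt) of lists
        # of Generation objects, each carrying the generated .text.
        for generations in response.generations:
            for generation in generations:
                self.outputs.append(generation.text)
```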
What is the purpose of the on_chain_start method in the custom callback handler?
The on_chain_start method is called when the chain begins running. It can be used to perform tasks such as initializing variables or printing messages.
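For example, a handler might use on_chain_start to reset its log at the start of each run. A minimal sketch (plain class standing in for a BaseCallbackHandler subclass; the `ChainLogger` name and `events` attribute are hypothetical):

```python
from typing import Any, Dict, List


class ChainLogger:
    """Sketch: reset a per-run event log each time the chain starts."""

    def __init__(self) -> None:
        self.events: List[str] = []

    def on_chain_start(
        self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any
    ) -> None:
        # Start fresh for this run, then record the inputs the chain received.
        self.events = []
        self.events.append(f"chain started with inputs: {inputs}")
```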
Video Explanation:
The following video, titled "Create a PDF Search and Summarization Tool in less than 100 ...", provides additional insights and in-depth exploration related to the topics discussed in this post.
In this tutorial, you'll learn how to build a powerful semantic search application using Streamlit and GPT-Index (Llama Index).